Monday, March 31, 2008
In the interests of full disclosure, some of my caution certainly arises from having taken money from an individual who, in my personal opinion, is one of the most Machiavellian VCs in Silicon Valley. The stories I could tell would curl the hair on the back of your neck. As a side note, if Google had existed in 1999, given the public record, I never would have taken his money. Though I didn’t know it, his reputation had already hit the San Francisco Chronicle.
Nevertheless, both before that time and since then, I have become friends with several VCs who I would trust implicitly. Beyond my VC friends there are many others that I am sure are awesome people. So I don’t want to paint with too broad a brush. But I still say, beware.
The core of my argument is that founders’ and VCs’ interests are generally misaligned in several important ways.
First, for the founder, this business is probably the biggest single asset the person has. All the founder’s eggs are typically in this one basket. The VC, by contrast, operates a portfolio that allows for significant failure. His goal is not a portfolio of solid but modest businesses. The VC *needs* a home run or two to offset the fact that most companies fail. This means a VC may push a founder to swing for the fences when a safer play might be more likely to ensure survival.
Second, VCs tend not to respect or care about founders. There is an old joke within VC circles that the most important job a VC has is to replace the founder. Of course, once replaced, the likelihood that the founder will make any money, even in a scenario that might be a win for the VC, is very low.
Third, VCs generally don’t add that much value beyond cash. They will try to say otherwise. Unless they are famous for adding such value (check Google), they are probably lying. Aside from the fact that they are probably incapable of adding much value, even if they could, VCs are extremely busy managing their portfolios, meeting new companies, dealing with limited partners, etc. If you want your company to succeed, you will really have to do it yourself. Believing that a VC is going to add some incredible value that will help make your company succeed is foolhardy. There are definitely exceptions to this rule, but they are few and far between. In short, founders should take the best financial deal they can get, from whatever source seems best, with the best available terms.
Finally, if you are a technical founder, be particularly wary, because most VCs tend to think of techies as totally fungible, weird, and not capable of leading. This is often so, but often it is not. I think most of us would agree that Larry Page and Sergey Brin are, and always were, capable of leading Google. And yet, if you step back in time, people were, from the beginning, skeptical of their leadership. They were perceived as smart but goofy. They probably never really needed Eric Schmidt, and yet it is totally conceivable that if they had not had the leverage they did, they would have been pushed out for not being “seasoned” enough. That certainly was the general industry perception at the time.
And so, I guess after reviewing my list, I would just say: be careful out there. VCs can be good people and valuable resources too. But a bad one (and there are plenty of those) can really make you feel like, by comparison, anesthetic-free tooth extraction wouldn’t really be too bad at all.
Friday, March 28, 2008
I love abstractions.
The essence of the concept of an abstraction is a framework that simplifies how you think about and work in a given domain. Abstractions can be (and often are) argued against by suggesting that you don’t really need them. In computer programming, we didn’t need C because we had assembly language. We didn’t need C++ because we had C. We didn’t need Java because we had C++. To me these arguments (which people really made) were silly. I abandoned assembly language in the ’90s.
The point is none of our existing abstractions are *needed*. But our human brains can only manage a certain amount of complexity at a time. Complexity is fine, but only in bite-size chunks. Abstractions are, in essence, a really generalized form of user interface. I think the reason I have written so much about user interface is that it is so important to figure out how to map complex things into representational models in such a way that more people can access them. Abstractions allow us to do just that with any process we are engaged in. They let us encapsulate complexity so that we don’t have to think about it, achieving greater and greater levels of complexity efficiently while keeping more of the model of a given system in our heads.
And so the anti-abstraction argument rears its head in the RDBMS vs graph database debate. One of the arguments that the pro RDBMS folks make for why there is no need for the graph database model is that you can do everything that can be expressed in a graph database in a relational database. And there is some truth to this.
But there are two problems with this argument. The first is that this is only true in theory. It is not possible to build a graph database of scale using pure SQL – at least with the SQL tools that we currently have to choose from.
One reason for the scale problem is that the only way to do it is with what are called self-joins, where you join a table to itself. Conceptually, self-joins seem just fine. But the problem is that the database engine can do nothing other than brute-force, un-optimized traversals of the graph when confronted with a chain of self-joins. In other words, using this technique will not yield a useful database that is query-able at any kind of scale. Providing a graph database model requires some very specific and different kinds of thinking and optimizations from those that go into designing an SQL database.
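To make the self-join problem concrete, here is a minimal sketch (my illustration, not from the original post) using SQLite from Python: the graph lives in a single `edges` table, each extra hop of traversal costs another self-join, and a traversal of arbitrary depth cannot be written as one fixed join chain at all.

```python
import sqlite3

# A graph stored the relational way: one table of (src, dst) edge rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE edges (src TEXT, dst TEXT)")
conn.executemany("INSERT INTO edges VALUES (?, ?)",
                 [("a", "b"), ("b", "c"), ("c", "d")])

# Two hops requires one self-join; three hops would require two, and so
# on -- the query grows with the depth of the traversal.
two_hops = conn.execute("""
    SELECT e1.src, e2.dst
    FROM edges AS e1
    JOIN edges AS e2 ON e1.dst = e2.src
""").fetchall()
print(sorted(two_hops))  # [('a', 'c'), ('b', 'd')]
```

Each self-join is another full pass over the same table, which is exactly the brute-force traversal described above.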
Another problem is that one giant table using self-joins for traversal means a huge write bottleneck. Yes, you can avoid that with sharding, depending on your design, but sharding is definitely not part of the SQL model, so you can’t say SQL is helping you here.
The second, and I believe more important, argument against implementing a graph data model in SQL is that even if SQL could do a good job of representing a graph model, building your graph system in SQL is not a very good abstraction. The truth is that most of the things we want to do in app development look more like graph structures than relational ones. Graphs are elemental to computer science because most interesting algorithms, and in fact real-world data models, can be very naturally thought of as graphs. Graph theory is (if things are as they were when I was in school) the first thing you learn when you begin studying computer science, and there is very good reason for this. The fact that Facebook was able to anchor the idea of what they were building as a “social graph” is an incredible testament to the innately natural character of the graph concept.
So if you are representing a graph, you really want an API that reflects the unique and useful characteristics of a graph. In other words, you want an abstraction that reflects how you really think about the data and not some jury-rigged representational model continuously intruding itself into your thought process. And so, having a data store that allows us to express our data in a way that is much more similar to how we actually think is enormously helpful.
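As a hypothetical sketch of what such a graph-shaped API might look like (the names here are my own invention, not any particular product’s), you think directly in nodes, edges, and traversals rather than in tables and joins:

```python
from collections import defaultdict, deque

class Graph:
    """A toy graph store: adjacency sets plus a traversal primitive."""

    def __init__(self):
        self.adj = defaultdict(set)

    def connect(self, a, b):
        self.adj[a].add(b)

    def neighbors(self, node):
        return self.adj[node]

    def reachable(self, start):
        """Breadth-first traversal: everything connected from `start`."""
        seen, queue = {start}, deque([start])
        while queue:
            for nxt in self.adj[queue.popleft()]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen

g = Graph()
g.connect("doc", "alice")
g.connect("alice", "meeting")
print(g.reachable("doc"))  # {'doc', 'alice', 'meeting'}
```

The point is not this particular implementation but the shape of the interface: traversal is a first-class operation, not a chain of joins intruding into your thought process.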
And such is the case with attempting to implement a graph database using SQL. You can do it, but it is unlikely to work very well, and because you don’t have the benefit of the abstraction, it actually adds to the complexity of the design instead of simplifying it.
The bottom line is that graphs are a better representational model when the structure of your system will change frequently. Relational is a better model when the structure will be static. Today, I think most of us are not building applications that are ideally structurally static.
Because most applications today have a much more dynamic nature, graphs are, for most people, under most circumstances, a far better abstraction. And to me, there is little in this world more powerful and satisfying than a great abstraction.
Thursday, March 27, 2008
In any case, the point I want to make here is that many people are saying that relational does not necessarily mean SQL. And indeed this is true. Relational refers to what really is a mathematical framework for mapping tables of data.
However, certainly in colloquial terms, relational database *does* mean SQL. There are precious few people who, when they think of solving a multi-table problem, do not think of MySQL, Postgres, Oracle, et al.
Perhaps it is foolish that people think this way, but I am not familiar with any other category of commercial-grade relational products that are on par with these SQL products without being SQL. Note, I am not saying that they do not exist. Though I am curious, if they do exist, why so few people know of or use them.
In that regard, over at Hacker News, there has been a *great* discussion about these issues. One commenter, edw519, provided a list of links to products that are relational but, as he describes them, are not rigid in the traditional SQL database kind of way, and that perhaps address some of the issues I am raising. I have listed the URLs below.
It would be great to discuss some of these products. I will be taking a look over the next few days, but since there seem to be so many of you that “get” what we are talking about here, I would love to hear in comments some of the pros and cons of these referenced applications.
Also note that there are quite a few products that I know *do* do what we are talking about in the Death of the Relational Database article. However none of these are relational. So although we are creating a product in this general space, we are not the first people to think that this area is important, and my intent here is to also discuss other approaches with some combination of academic rigor, and conversational practicality. In the coming weeks I will try to begin conversations about some of the products I am aware of as well as some of the things people are working on that I have been made aware of through this blog.
And so this is a call to action for all relational != SQL zealots and anyone else that wants to weigh in. How far do relational tools go without SQL, and are they scalable, friendly and generally useful? Most importantly, do they solve the same kinds of problems I describe in Death Of The Relational Database?
Wednesday, March 26, 2008
Some of yesterday’s responses to the article have brought to mind a phenomenon that I first observed in college. When people fear some new technology will change the nature of and potentially the value of their high priest status, they react negatively to that possibility.
The first time I observed this phenomenon was in 1984, when the Mac was introduced. I was at the University of Pennsylvania, which was one of the twenty or so schools Apple had partnered with. As a result there were a lot of us Mac guys around. But I was a CSE student at the Moore School of Electrical Engineering. All of our facilities, including the computer lab, were in the electrical engineering school. And so you can imagine this was an extraordinarily geeky place.
The guys running the computer labs were all Unix guys. They hated the Macs. In fact they hated the *idea* of the Mac. The idea that something could be friendly or easy was offensive to them. The idea that anyone would or could use anything other than a command line was incomprehensible. Graphical user interfaces sucked! Arguments included the idea that if you didn’t know what you were doing you really shouldn’t be using a computer.
Of course now that just sounds stupid.
The next really big revolution of this sort that I can recollect was when Aldus introduced PageMaker and the whole concept of desktop publishing. In this case it was the graphic design community that suggested that desktop publishing was not a good thing. Making it too easy meant there would be all these horrible designs out there from people that didn’t know what they were doing. Heaven forbid!
Once again, today, that just sounds stupid.
I could go on, but the point is these cycles will never go away. The digital photography revolution yielded the same resistance. The experts are always telling us how dumb we are for seeking simplifications and abstractions for things that they are expert at. The cycle repeats itself with every innovation. Paradigm shifts are always perceived as a threat to the status quo. And sometimes they actually are. And so the arguments are always:
- It’s not powerful enough.
- There’s no need.
- It’s dangerous.
But the underlying psychological framework is really a fear of irrelevancy. If you make things too simple my expertise will be less important. *I* will be less important.
And so, as I have made the argument that for Web 2.0 applications, the graph model is far more effective than the relational model, the “high priests” are coming out of the woodwork. The arguments are the same as they always are: not powerful enough, unnecessary, and dangerous.
So as I read the comments yesterday, one theme that emerged was that anything you can do with a graph you can do with a relational database – and the relational model is more powerful. This of course is very similar to the GUI/command-line argument. Why make something easy? You can do all that with a command line!
But I must say my favorite is always when you get to the “dangerous” argument. That is an inflection point. When you get to the dangerous argument, you are being honored by an unwitting concession speech. It is *always* the sign you have won. And around 5 p.m. EST, it happened. One of the high priests lobbed the long ball. “Managing a database without a DBA is like letting children drive trucks.”
Interception. Touchdown. Game Over.
Tuesday, March 25, 2008
The subject of the event was the future of mobile video. One of the topics that came up was monetizing mobile video. There were five panelists, including the VP of Content & Business Development – Mobile at NBC Universal, USA, and, more importantly for this article, Loren Feldman from 1938 Media.
Loren Feldman. Tech Nigga
Now I am not going to get into the merits of Loren's perspectives, which you can imagine I might take umbrage with. They have been dissected pretty well here. Here are a couple more videos that do a good job of providing context for the kind of reputation Feldman has developed for himself.
Loren Feldman: Black People Can't Get It together
Loren Feldman: Black People Are Lame
In any case, regardless of what you think of Feldman's perspective, a bigger issue leapt front and center. It was odd watching, as Loren sat, normalized by the credibility of the other panel members. That oddness was heightened as the discussion turned to how mobile video was going to be monetized.
The loop that kept running through my head was, who in the world is going to advertise with Loren Feldman? Can you imagine... the Intel "tones" followed by "Tech Nigga, brought to you by Intel." Or perhaps the McDonalds logo with a voice over, "Tech Nigga, I'm Lovin' it."
I am sure some of you will rightly accuse me of a failure of imagination. But somehow, I'm just not seeing it.
Monday, March 24, 2008
You save everything, or at least you try. And when you do it, you feel like you’re really doing something good. But deep down there is another voice that speaks a truth that you would prefer not to acknowledge. Every time you implicitly or explicitly save something you know that may be the end of that bit of information forever. Innately you understand that most information loses its value the minute you save it. This is because, while you can save the data, it’s almost impossible to save the context.
By context I mean how it relates to the rest of your universe. For example, if I save a Word document, there is no way to indicate that it relates to a person or an email, or a Delicious tag, or anything else. A big part of the problem is that all of our data is in separate silos. Applications – be they web apps or desktop apps – tend only to know about the data that they create. It’s what I call “silo-ization.”
One solution to this problem might be some kind of intelligent application that can read all the stuff you create and figure out that certain things are related. And certainly I imagine that at some point we will have applications like that. But on a more basic level, we almost need a kind of universal framework to allow for manual or automated connection of heterogeneous objects. In other words, the “smarts” to help us connect things is of no use if there is no acceptable *way* to connect them. You’ve got to be able to connect before you can think about what to connect.
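As a toy sketch of what such a universal connection framework might look like (purely illustrative; the names and shape here are my assumptions, not a real product), imagine a flat store of typed links between heterogeneous items, so a document, an email address, and a tag can be related without living in the same silo:

```python
# A universal link store: every connection is a (thing, relation, thing)
# triple, regardless of which application owns either thing.
links = set()

def connect(a, relation, b):
    links.add((a, relation, b))

def related(item):
    """Everything linked to `item`, in either direction."""
    return ({(rel, b) for a, rel, b in links if a == item}
            | {(rel, a) for a, rel, b in links if b == item})

connect("report.doc", "sent-by", "alice@example.com")
connect("report.doc", "tagged", "project-x")
print(sorted(related("report.doc")))
```

The interesting part is what this enables: once every application writes into a shared link space like this, the "trails" between a document, its sender, and its tags become queryable.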
Allowing for the creation of context across data silos means that every piece of information that is important to me can be placed in the context of every other piece of information that is important to me. So if I look at a document, I’d like to be able to see who sent it to me or whom I sent it to. I’d like to be able to see that I tagged that document with the same tags as a bunch of web pages that are somehow related. I’d like to be able to see that the person who sent it to me has accepted an invitation to a meeting that I will also be attending.
The point is that almost every piece of information we collect has a trail to other bits of information. Right now we can’t see those trails. And so, our data spaces, be they our Gmail archives, or our hard drives, or our Delicious tags, etc., are really more like old attics with years of junk covered in thick dust. Modern software technology can and should do better.
Friday, March 21, 2008
Unfortunately, we do not have the same kind of technologist community as other geographies such as Silicon Valley, and while there are many, many technologists in the New York area, in my view there is not sufficient connectedness among us. Many of you are buried inside much larger institutions, such as investment banks and other organizations for which technology is more a necessary means to an end than something to be interested in or excited about in its own right.
So today, I am proposing the creation of a group called Geek NY.
Geek NY will be modeled on several ideas that I have seen work, and that I think could be effective. The anchor of the concept, which I hope can be similar to the NY Tech Meetup, is a monthly meeting where one or two people will present a technical idea or issue that would be of interest to the technology community at large. An example might be someone coming in to talk about the concept of map/reduce as a new paradigm for programming.
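For readers unfamiliar with the map/reduce idea mentioned above, a toy sketch (mine, not part of any proposed talk) is word counting: a map step emits (word, 1) pairs from each document, and a reduce step merges the partial counts.

```python
from functools import reduce

docs = ["new york tech", "tech meetup new"]

def map_step(doc):
    # Map: each document becomes a list of (word, 1) pairs.
    return [(word, 1) for word in doc.split()]

def reduce_step(acc, pair):
    # Reduce: merge one (word, count) pair into the running totals.
    word, n = pair
    acc[word] = acc.get(word, 0) + n
    return acc

pairs = [p for doc in docs for p in map_step(doc)]
counts = reduce(reduce_step, pairs, {})
print(counts["tech"])  # 2
```

The appeal of the paradigm is that the map step is independent per document, so it parallelizes trivially across machines.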
The idea is, at the monthly meetings, to offer presentations that are broadly valuable to the technologist community. Presenters will be a mix of local technologists, and presenters from the corporate or academic world. However, the idea would never be to promote specific products, but more to discuss new ideas, processes and thinking.
Once we begin to improve connectedness, I believe there will be an opportunity to tackle some of the broader issues that keep New York from really being on the map as a great place to create technology and to build technology companies.
Several months ago we formed the New York Tech Boosters mailing list, which I would encourage everyone to join, as it will be the starting point for all discussion about Geek NY. If you are in the Tri-State area, join us.
Thursday, March 20, 2008
The brilliance of iPhone is in its politics, and that credit must go personally not just to Apple as a company, but to Steve Jobs as an individual. Steve understood intuitively that Apple could do something that perhaps no other company in the world could do, which is to shift the dynamic between the handset manufacturer and the carriers by delivering a useful product in a cesspool of cell phone crap.
Pre-iPhone, if you had had any conversations with handset manufacturers about why things were the way they were – why phones sucked so badly – they would blame it on the carrier. They would say, the carriers dictate this, or the carriers won’t let us do that, and that is why we suck.
The real truth is that carriers and handset makers were focused on each other instead of on their customers. They were focused on the minutiae of existence. They were focused on the next feature they wanted in the next quarter that could perhaps generate the next 1% increase in carrier revenue. They were focused on Joe Blow getting fired and who the new guy managing handset relations is. The one thing neither side was focused on was how crappy their products were. But it was all too apparent to the customers.
Of course it wasn’t just a horrible carrier/handset manufacturer dynamic. The carriers did have lots of control, but that did not stop Microsoft, or Palm for that matter, from making fairly open platforms with little carrier interference. They did what they wanted. They just sucked regardless, and you really can’t blame the carriers for that. Palm and Microsoft were just massively incompetent. Palm, to this day, hasn’t been able to ship a real OS on their phones. This is something they have been working on for *years*. The technical leaders at Palm are (or at least have been) idiots. At Microsoft, they just suck at everything until Apple shows them how to do it.
But the bottom line is anyone who actually used a phone over the last few years knew they sucked, and intuitively knew that they could be far better. This is why, for years, people have been anticipating the iPhone. Not because such a product was unimaginable, but because cell phones were so bad that something that could blow away the status quo was *easily* imaginable.
What Apple did was to rethink everything and, presuming a clean slate, design something that would actually be useful. The willingness to rethink was brilliant, and risky. I think the actual design could easily have come from any of the top-flight design firms like Frog or IDEO. But Jobs actually believed, actually *knew*, that such a product could be delivered to the market, despite the perceived politics and carrier dynamics, and that was the revolution.
Not being in the market to start with, which some pundits viewed as a disadvantage, was actually Apple’s greatest strength. They were unafraid to uncompromisingly deliver a radical shift in the market. And they were willing to apply their substantial engineering and design resources to the task. Their outsider perspective allowed them to think *purely* in terms of what the world should look like – not the way it is. Correction – not the way it *was*. At the end of the day, Apple changed the market forever, in part because Steve had a little bit of vision, but more importantly because he had a pair of big brass balls.
Wednesday, March 19, 2008
Since that time, I have discussed and commented a lot about phones and the various players, and one thing has become incredibly clear. There is a huge divide between Europe and the US. Europe loves Nokia. They have 40% worldwide market share, and almost none of it comes from the States. This is particularly true in smart phones, where none of Nokia’s top-of-the-line phones are available through carrier deals.
What all this means is that Europe loves Nokia, while we in the U.S. are relatively unfamiliar with it.
I am not a Nokia user, but I am familiar with the OS in the same way as I am familiar with the iPhone, which is to say as someone who has read and discussed a lot about the platform and played with one, but does not own one.
But it is clear to me that the iPhone, despite its failings relating to background processing and other complaints, is a far better platform than Nokia’s phones and the Symbian OS. But don’t tell that to a Nokia user. I posted a comment on Engadget about the risk Nokia faces from the competitive threat of the iPhone and other software-focused companies, and was savaged by European Nokia zealots. People bring all of their nationalistic or pro-continent baggage and are unable to engage in rational discussion about the issue. It’s very much like criticizing Apple. There is a sizable contingent of people that have no interest in hearing anything negative about them. And they are similarly aggressive and vocal.
And so I wonder whether what all this means is that Nokia will be viewed in Europe as a European company, and as such more worthy by continent partisans. I don’t believe those of us in America like Apple products because they are American. After all, Microsoft is American as well. But for Nokia, there is no other European competitor and so, in many respects the zealous support may reflect a certain Pan European anti-Americanism.
As this battle gets more heated and Nokia becomes more fully engaged in building their next generation platform, it will be interesting to see how these tensions play out.
Tuesday, March 18, 2008
However, according to John Gruber at daringfireball.net:
As a postscript on the “no background apps” policy, a source confirmed to me that the iPhone AIM client AOL demoed during the iPhone Roadmap event does not cheat by continuing to run in the background — it quits when you switch to another app, but doesn’t log you out of AIM automatically. Such a client can’t notify you of IM messages from the background (a la the way the iPhone notifies you of SMS messages), but when you switch back to the AIM app, messages you missed should appear. Be wary of claims that “An app that does X is impossible without background processing.”
I trust John implicitly on his Apple insider info. This makes me *wrong*.
I also said in my follow up, that this was AOL and that as such, anything was possible, and so they just might do something this lame. But to tell you the truth I really didn’t believe that. It was and is inconceivable to me that AOL would bring perhaps their most important franchise to the iPhone in such a way that it would be little more (in fact perhaps a little less) useful than *non-push* email.
If even the big name partners will be prevented from making useful communications products that actually do things like notifying you of incoming messages – let alone any other interesting communications related applications – the implications are staggering. I just still have a hard time believing this is so.
I do think, though, that John’s conclusion that this demonstrates one should be wary of people (that would be me) saying you can’t do X on the iPhone because of background processing is quite off. You have no need to be wary of me. I am telling the truth about the significance of no backgrounding, and John is demonstrating that fact.
What this really suggests is that Apple is going to prevent *everyone* from doing communications products in a very evenhanded – perhaps ham-handed – way. They are OK with the entire third party communications app category failing – at least for a while. What was to me a huge downside for developers outside the velvet rope, has just become an astonishing downside for all iPhone users. And if I want instant in my instant messaging, I better stick to my old school Blackberry.
In truth, I think I preferred the situation where Apple was just lying. Because AOL Instant Messenger without instant messages – that totally sucks.
Monday, March 17, 2008
The “Apple’s iPhone SDK inhibits Mobile Innovation” article has generated an enormous response. Specifically, 4000+ people have read the article since it was posted on Thursday. To put that in context, that was just a bit under my unique visitor count for all of February. The piece was linked to by the dean of the Macintosh commentariat, John Gruber at daringfireball.net, as well as Hacker News (news.ycombinator.com), dzone, and a bunch of other websites and blogs.
The comments have generally been either in agreement or, in the case of John Gruber, largely but not totally in agreement. Even in his case it does appear that I affected his thinking on the subject. As of this writing, the dzone website has the article listed with 14 up votes and one down vote. And so my mission of educating people on this issue and changing some minds has been largely achieved here, and so I want to thank you guys for spreading the word.
There were 33 comments, the most ever for this blog, and while I can’t respond to all of them, I want to thank everyone who took the time to write. I did also want to pick a few out and address them in a post since others might have had similar thoughts or questions.
Interesting write up but I do take exception in thinking there are some problems that can ONLY be solved with Android (or some brand XYZ OS). In 25+ years of programming, I have found that background processing is simply one tool (and in the grand scope of things a very small tool) in the overall tool chest.
My Response: I did not say, and do not believe that some problems can only be solved with Android. I believe two things:
- At the present time, Android is the *easiest* platform for doing the things we are doing. This relates to communications, which requires background processing, and to location-based apps, because Android understands maps at the OS level. The mapping stuff is not core to the argument here. I provide it only as background.
- Background processing is absolutely required to do *communications* apps. If you're doing Excel on the desktop or Doom on the handheld, you will not have a problem.
You claim that resource management problems for background third party apps have already been solved in real time systems. Frankly, I'm calling BS on that. I can not think of a single real time system that is open to background process third party apps (already a very rare beast), that has created a good solution to this. Indeed, the only example that I can think of (WinCE), is generally considered a textbook case of why this sort of programming is a bad idea for real time systems.
My Response: Hmm… I love it when people quote you as saying something you didn't say, and then attack you for it.
I didn't say this: "You claim that resource management problems for background third party apps have already been solved in real time systems."
What I said was that the problem of background tasks running safely has been solved. But more importantly, I think maybe you are not clear on the definition of a real-time system. For a full definition, read here. Phones actually do not qualify. Real-time applications are applications where the response time must be guaranteed. This has typically been the case in the embedded software world. Probably the most commercially successful RTOS is VxWorks. Windows CE is most definitely *not* an RTOS and is never sold as such.
Moreover, I did not say that any RTOS supported third party applications. What I am saying is that many platforms have demonstrated the ability to finely control the amount of processing power, memory, etc., that a piece of code can use. These techniques are *totally* transferable, given that a phone is a far less critical environment than some of those where these kinds of things are often run.
In this particular case, the types of background tasks we are talking about could be scheduled to run at perhaps 1/1000th of the available processing power, which would essentially be negligible. This is just not hard.
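As a rough sketch of that kind of throttling (my illustration; the function names and numbers are made up for the example), a scheduler can duty-cycle a background task so that its work time is a fixed fraction of wall-clock time:

```python
import time

def run_throttled(task, fraction=0.001, slice_s=0.001, iterations=3):
    """Run `task` repeatedly, sleeping so it uses ~`fraction` of CPU time."""
    for _ in range(iterations):
        start = time.monotonic()
        task()  # one small unit of background work
        elapsed = time.monotonic() - start
        # Sleep long enough that work time / total time is roughly
        # `fraction`; `slice_s` is a floor so we always yield the CPU.
        time.sleep(max(elapsed * (1 - fraction) / fraction, slice_s))

ticks = []
# Demo uses fraction=0.5 so the example finishes quickly.
run_throttled(lambda: ticks.append(1), fraction=0.5, iterations=3)
print(len(ticks))  # 3
```

With `fraction=0.001`, the task gets roughly the 1/1000th of the processor described above, which is exactly the "negligible" budget the post has in mind.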
Comparing Android, which is not actually available on any phone, to the iPhone SDK, which is a beta, seems a little odd to me. Who knows how Android will behave when available on an actual product.
My Response: As it is, the Android SDK *is* better for communications. The reason is clear. You can’t do communications *AT ALL* with the iPhone based on the current spec. So on that front, you are right, there is no comparison.
Actually, the AIM example was perfectly valid. You can use AIM for chatting when you want to chat. If you exit the app, sure, now your chat client is no longer running. But nothing in the demo or in the discussion implied that it would--only you assumed that it had to be real-time, constantly on, and behave identically to a desktop equivalent. But it doesn't have to, and for many, if not most, users, the ability to use AIM for quick chats is a huge win even if the live notification is not present.
My Response: Ok, so if you are on your iPhone, do you appear in your friends' buddy lists as available? If not, then you are saying the iPhone would be the first outbound-only AIM client. If so, then what happens in the 99.9% case where your phone is inactive, or doing something else, and does not receive your IM? I cannot imagine AOL presenting such a design as a real AIM client. But indeed it is AOL, and perhaps stranger things have happened. If AOL does introduce such a lame turd for their iPhone AIM client, you and others who suggested the same will be right and I will be wrong.
"Our application would not be easily possible on any platform other than Android."
Any? What about platforms other than Android or iPhone? Would it be harder on OpenMoko, for example?
My Response: OpenMoko may end up being a fine system, but it is far more bare bones than Android. And so while everything we are doing could indeed be done on OpenMoko or Qtopia for that matter, it would not be “easy”. Of course this is indeed better than the iPhone, where these sort of communications based applications are impossible.
You as a developer may care about the lack of this feature in the sdk at this time but I highly doubt that the majority of the customers for this device do.
My Response: End users don’t know they want something until they see it and understand its usefulness. This is the way innovation works. People didn’t know they needed PCs, or spreadsheets, or graphical user interfaces, until they saw them. With background processing enabled, developers will create innovative apps that users will care about very much.
Friday, March 14, 2008
It was a revolution. We had all had a tantalizing taste of graphical interfaces through Apple's $10,000 Lisa computer. But it seemed to most of us more like some unattainable device from a science fiction movie than something any of us could actually own one day. No one imagined it could really be brought down to $2,499. The introduction of the Mac was an extraordinary moment in the history of computers. At the time there was no Microsoft Windows, only MS-DOS. And there would not be a usable Windows for years.
We all thought Apple would rule the world. But it did not work out that way.
Apple could not figure out how to sufficiently broaden the appeal of its products outside its core audience. Part of the problem was philosophical. Apple had no interest in powering hardware from other manufacturers. Part of the problem was resources. When Microsoft finally did get Windows together, there were dozens of companies making Windows compatible hardware. They hit every possible price point, form factor and configuration. They hit every form of distribution. They hit every possible marketing channel. It is very hard for one company to compete with dozens, and to end up being the majority product. Ultimately Microsoft trounced Apple by outflanking it.
Transport to the present. Enter the iPhone.
The same outlines are repeating themselves in the phone market. This time it's Apple's iPhone vs. Google's Android. Apple's iPhone is "slicker". It is more polished. It has personality. But Apple has never been interested in playing nice to build its market. The iPhone is and will always be on Apple's own private island. There will be no "iPhone compatible" phones from other vendors. There will be no iPhone with a keyboard. There will be no iPhone for Verizon, or Sprint, or T-Mobile. There will be no $100 iPhone. As Henry Ford said, you can have your car in any color... as long as it's black.
And so, Android is the anti-iPhone. Google offers an *open* operating system. They play nice. They provide features that developers ask for. Google *wants* hardware manufacturers to use their platform. They have made compromises to the design to support different form factors, and this will make it less slick. As with Windows, Android sacrifices design in service of ubiquity. History has shown this is a great business decision.
There are dozens of companies out there that want to play and too much territory for one company to cover, let alone dominate. There are just too many price points, too many complicated distribution issues, and too many form factors. More than a billion phones a year are sold and even in the most optimistic of scenarios, Apple will only be able to sell a small slice of those. So, while there is a lot of well deserved excitement about the iPhone, people should be clear that it is almost structurally impossible for Apple to have the dominant phone platform.
Of course none of this means that the iPhone will not be wildly successful, just as the Mac is. It just means that it will ultimately not be dominant. The iPhone is already a huge hit, and in the near future it will be a big part of Apple's revenue mix. If Jobs' goal is to make lots of money for Apple, as it should be, the strategy is sound.
But despite what I expect to be a huge and growing financial success, Android will be the Microsoft Windows to Apple's Macintosh. Android will hit every conceivable mix of features and channels. Google will court developers, while Apple will fight with them, as they always have.
And in the end the more things change, the more they stay the same. When industry historians look back at the history of this era, 2008 will look very much like 1984.
Thursday, March 13, 2008
Unfortunately, some of the details we have discovered about the SDK will have a real impact on the ability of developers to innovate on the iPhone. The issue is that Apple has set a policy that third party application developers can't create applications that run in the background. Currently, there is a minor uproar in the blogosphere about Apple’s policy on this matter.
The purpose of this article is to convince all of you that the uproar should not be minor.
Background Processing is the Key to Mobile Innovation
Prohibiting background processing is not just a question of one feature being left off a long list of otherwise very well executed features. The issue of background processing is *the* issue for a mobile device because it is key to two things:
- telling the world about your status in some ongoing way
- receiving notification of important events
These two things are the key to most of the real new innovations in the mobile space. To be clear, by innovation, I mean creating functionality that has not been possible before. I do not believe that Apple’s beautiful new iPhone UIs or visual metaphors allow for the creation of truly new application categories. Apple’s tools are great, but they just make apps that were already possible easier to build and easier to use. But as a developer, I want to create things that have been bottled up in my head but that, without the right platform, were fundamentally impossible to build.
Let's look at some of the specific limitations that the “no background apps” policy really imposes.
In order to innovate in the communication area, you must have notification. Without it there is no push email, no phone ringing, no instant messaging, no twitter notification, and on and on. Of course some of you will argue that you don’t need third parties to do this stuff because Apple will handle all that internally. But Apple didn’t invent instant messaging. They didn’t invent Twitter. They didn’t invent VOIP. And they certainly didn’t invent the phone. By making third party communications related innovation on the iPhone impossible, they are potentially killing the next great thing that could only be done on a mobile device, but won’t be because Apple forbids it.
Of course, there is more to this than just communications. You can’t even implement something as simple as an alarm clock without notification. More interestingly, there are a whole host of location-based applications that are impossible with the current restrictions. I’d love to have an app that notified me when I was near something that I have been looking for, like perhaps an open house for a piece of real estate in my price range. Personally, I believe that location based notification is the most fertile of all the potential new areas of innovation.
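As a sketch of what such a location-based notifier might look like, here is a hedged Python example. The helper names and coordinates are purely illustrative; on a real device this loop would be woken by the OS on each location fix as a background task, which is exactly what the current policy forbids.

```python
import math

def distance_km(a, b):
    """Approximate great-circle (haversine) distance between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def check_interests(location, interests, radius_km=1.0):
    """Return the names of saved interests within radius_km of the current fix."""
    return [name for name, pos in interests.items()
            if distance_km(location, pos) <= radius_km]

# Hypothetical saved interests: an open house near the current position,
# and a record store a few kilometers away.
interests = {"open house": (40.7808, -73.9772),
             "record store": (40.7484, -73.9857)}

hits = check_interests((40.7812, -73.9765), interests)
print(hits)  # -> ['open house']
```

The matching logic is trivial; the hard part Apple has taken off the table is simply being allowed to run it while the user is doing something else.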
Presence & Location Broadcasting
Presence, the concept of notifying others that you are “available” in instant messaging applications, is a huge and important functionality. It has become important in a whole host of applications beyond pure instant messaging. In the mobile world the concept of presence takes on even more significance, and with location aware devices we can broadcast not just “presence” but location. Again, neither presence nor location broadcasting is possible without background capability and the Internet is filled with discussion about how these concepts can change social dynamics. But as it stands now, none of that innovation or exploration will happen on the iPhone.
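Here is a minimal sketch of the presence idea, assuming nothing beyond an in-process publish/subscribe loop. The class and method names are my own invention, not any platform's API; on a phone, publish() would be called by a background task.

```python
class PresenceService:
    """Toy in-process sketch of presence and location broadcasting.

    A background task on the phone would call publish() periodically;
    subscribers (buddy lists, location-aware apps) receive each update.
    """
    def __init__(self):
        self.subscribers = []
        self.last = None

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, status, location=None):
        self.last = {"status": status, "location": location}
        for cb in self.subscribers:
            cb(self.last)

svc = PresenceService()
seen = []
svc.subscribe(seen.append)            # a buddy list watching our status
svc.publish("available", location=(40.78, -73.97))
svc.publish("away")
print([u["status"] for u in seen])    # -> ['available', 'away']
```

The subscriber never polls; it simply reacts when an update arrives. Without a background task allowed to keep publishing, the broadcast stops the moment the user switches apps.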
The rationale for Apple's position is never directly stated, but it is implicitly clear: background apps are risky because they can bog down the phone and harm the user experience. This is, of course, bogus. There are many ways to address it, such as task prioritization, sandboxing background tasks, or even limiting their size. There are lots of other things Apple could do that I won’t list here. But the point is that Apple has some of the brightest engineers on the planet, and this issue has been dealt with in Unix, and in real time systems, for many years.
In short, this argument is a strategically placed fig leaf, which is easily blown away.
The truth is the real arguments are much more about business and an incredible ambivalence about developers. First, Apple is concerned about protecting its revenue streams, and about being the innovation leader on the platform. More importantly, Steve Jobs has a bizarre discomfort with being out-shined by third party developers on his platforms. And he has a lifelong personal preference for absolute control over everything. Often that drive serves him and his company well. In this case it does not.
The Dishonest SDK Rollout
Of course, it's one thing to have a strategically flawed policy. It is another to attempt to trick the market with a disingenuous marketing event. At the SDK launch, Apple presented AOL Instant Messenger as an example of a product that was developed with the SDK for the iPhone. Now I haven’t used the iPhone AIM client, but I can’t imagine that it will not have presence or notification. But with Apple’s current policy, you can’t build a fully functional instant messenger, because you can’t do presence or notification without background processing. Obviously AOL got special dispensation to break the rules. But the rest of us developers will not make it past the velvet ropes. And so, at least this aspect of the Apple launch was a lie.
Apple vs. Android
My company has been developing an application for Google’s Android phone operating system for the last several months. Our application would not be easily possible on any platform other than Android. In fact, important parts of the application are impossible without background processing. In short, Apple may be visually sexier, but I can actually innovate more effectively on Android. This should never be. I would love to develop for the iPhone, but right now I can’t see how to make it work. Interestingly, Android also builds maps in at an OS level, which, in my view, is far more important than great animation or the new screen pinching technology.
What To Do
Apple’s new platform will make it possible to create lots of great games, and prettier, more functional user interfaces. But personally, I’ll take one paradigm-shifting application over a thousand cool games any day. So if you agree, I strongly suggest you say something. Link to this piece or restate the arguments elsewhere.
Apple does respond to customer uproar, and right now the SDK feedback is way too positive given the significance of what is missing. The market excitement is a response to the eye candy, which is great. Perhaps even necessary. But it is not sufficient. In attempting to keep all innovation to itself, Apple is really doing a disservice not just to us, but in a shortsighted manner, it is also doing a disservice to itself. In fact, Apple is slowing down the kind of innovation that will most assuredly drive the next generation of mobile experiences.
And so I say, Please, Mr. Jobs, tear down that wall.
Wednesday, March 12, 2008
While being shocked by the incredible circumstances surrounding the departure of New York Governor Eliot Spitzer, I am elated that the new Governor will be David Paterson.
While I do not know David, we both grew up in the same Harlem community, and our parents were both part of the same Harlem political machine. My dad was a lawyer and later a judge, and while practicing law, his most important client and closest friend was Congressman and Civil Rights leader Adam Clayton Powell.
As a result, growing up and throughout my life, I either came to know personally, or at least knew of, all of the Harlem political figures. While I never actually knew David's father, Basil Paterson, as one of the deans of the Harlem political community he was a well known figure to me. As a result, when his son became involved in politics I immediately became aware of him as well. While I do not know David personally, the one thing I can say, from the many people I know who do know him, is how wonderful a person he is and how good he is at what he does.
Of course, only time will tell whether he will be a good Governor, but as a Black man I am always happy to see one of our own break through another barrier, perceived or otherwise. But the most amazing thing is that David is legally blind. As far as I know, there has *never* been a blind statewide elected official in this country. Being Black, being blind, and becoming New York's Governor. That is a singular and compelling story, if nothing else. I wish him luck.
But I have to say, I am not shocked by the review. The problem with Twine begins before you get to the product itself. I really don't even need to use it to suspect it might suck. It is a problem with everything I have actually seen which relates to the semantic web.
The problem is, no one understands.
As I wrote in my Death of the Relational Database article, many of the ideas that underpin the semantic web are powerful, but they have been designed and executed by people that are all just a bit too smart. Robert Scoble does an interview with Nova Spivack, the CEO of Radar Networks, which is the creator of Twine.
Nova Spivack's problem: Too Smart.
Watch that interview, and then report back in the comments WTF you think Twine is for. I dare you. If you can, it just proves one thing. You are smarter than me.
The problem is that no one in "semantic web land" can explain what it's for. This holds true at the platform and tools level, and, with Twine as example number one, it also holds true at the consumer service level.
So getting back to Marshall's review. The good news in the review is that, apparently, once you actually try the product, it seems as though one might be able to glean that there could be utility there. Marshall's problem is that he couldn't get any value from it and couldn't figure the interface out. His two big headlines were, "It Doesn't Work Very Well", and "It's Poorly Organized".
Interestingly, there are two sides to this coin. One of Marshall's commenters, David Scott Lewis, who is apparently one of the most active users of Twine, responds that the product is useful, but that Marshall didn't give it enough of a chance and didn't do it right. He also says that Twine really isn't ready yet but that it has lots of potential. But interestingly, David feels compelled to tell us that he is not just some blogger, but was an analyst at the Meta Group. In other words, he too is too smart.
The thing is, yes, I have been working with computers for 30 years, but I have no tolerance for consumer products that are too complex. My eyes glaze over. So as I see it, requiring users to be "smart enough to get it" is a very low design target for any supposedly general-audience web product. And so, to me, suggesting that the user didn't try hard enough or didn't explore correctly is the kind of comment appropriate for something coming out of a university research lab, but not for a product that is in beta, preparing for public release. Indeed there are some powerful ideas under the hood of semantic web technology. The question is: can you make any of these products accessible and obviously useful?
And so, while I am sure Twine has some amazing technology, particularly relating to natural language processing, what is clear is that making a mainstream product that could be understood and appreciated by the masses was not job one at Radar Networks. Because if it was, no matter how early it was in the product lifecycle, a reviewer like Marshall Kirkpatrick wouldn't be confused as to how to use the product, and I wouldn't be confused as to WTF it's for.
Tuesday, March 11, 2008
The reason, as far as I can tell, is too much complexity. It reminds me of the Y2K problem. When we were approaching the millennium we realized we had all of this code, much of it in COBOL, a dead language, and much of it compiled and in unreadable object code instead of human readable and changeable source code.
We spent billions of dollars modifying and upgrading code, still with no certainty that we were fixing the problem. Confidence only returned on January 1, 2000, when planes didn't fall from the sky and trading systems didn't grind to a halt, generators didn't stop running, etc.
The essence of the problem was complexity and opacity leading to great fear. We had a sense that things that we used to count on before January 1, 2000, we would not be able to count on after January 1, 2000, perhaps costing untold amounts of money and potentially lives.
The same thing is happening in the credit markets. We have "bugs" in these markets which we can't understand or predict. But unlike with the Y2K problem, we have no January 1 witching hour to define the endpoint.
If you want to read in more detail about what is going on, you can read this commentary and this commentary by Paul Krugman, and this other New York Times piece. Each piece relates to a different aspect of a core crisis in confidence in different but related credit markets. In summary, the problem is that Wall Street has been just a bit too smart. They have created financial instruments either containing, or dependent on, opaque bundled loan packages that now no one understands. They are called mortgage backed securities, and they have been sliced and diced across multiple owners and so now not only do we not understand them, we don't even know who really owns what.
Even worse, other credit instruments that are not mortgage backed carry insurance that is supposed to guarantee their solvency. But since people now don't trust the insurers' ability to pay, they don't trust the insured instruments. All these securities, of now questionable value, then sit on a broad range of corporate balance sheets in the form of direct investments, and in the form of equity in other companies that hold them. Banks in particular have proven themselves either unwilling or unable to accurately indicate their financial exposure.
As a result, no one has confidence in anyone else's balance sheet. Companies that are or should be considered AAA credit are having a harder time getting credit, and in some cases can't.
This is profound.
The capital markets are based, in large part, on our ability to trust what balance sheets say. We have complex accounting rules and disclosure rules, particularly after Sarbanes-Oxley, that are designed to maximize our ability to trust what we are reading. And while nothing is ever anywhere near certain, the prospect of major blue chip companies sustaining losses we cannot predict, from drops in assets that *seemed* solid, is scary.
My main concern is that there is no specific time we can look forward to when this will go away. There is no clear tick of the clock that will signal the end of the crisis. I am not saying it won't ultimately be resolved, but I am saying, as far as I can tell, this is not, as some people say, a cyclical recession. It is a systemic problem manifesting itself as an economic downturn. And if the system is broken, letting time pass may not be enough.
Again, I am no economist, but to me, this feels more serious than most people are really giving it credit for. And while I don't think this crisis will impact the tech market in the same way that the 2001 bubble burst did, I still worry. We can also be thankful that our markets are getting more and more global which should have the effect of attenuating the damage here in the US. But all that said, I do believe this is a crisis of major proportion, and I am scared. I'm scared for all of us.
Monday, March 10, 2008
First of all, I like Apple. I spent the first eight years or so – from ’86 to ’94 – of my professional life developing Mac hardware, and then software. I used to bleed the Apple rainbow… when Apple’s logo was a rainbow.
I still like Apple’s products, but I think I have a more even handed view. I am not sure how, but I can actually see through the Steve Jobs reality distortion field. I have primarily written good things about Apple and Steve Jobs. I love the iPhone, and I think, while being a twit, that Steve Jobs is brilliant.
Now that said, I think the situation that I wrote about Friday, where Apple has outlawed multi-tasking on their phone for 3rd party apps is insane. But that’s not the purpose of this post.
What I want to talk about is the power of Steve Jobs’ reality distortion field. On Friday I wrote about the multi-tasking issue. I referenced an article by Michael Arrington over at TechCrunch where he included the full 100 page document from Apple that contains the explanation of policy. The issue was also covered at Gizmodo.
And so it was fascinating to see the fanbois descend upon me like locusts, berating me for being so stupid that I could not understand what was “so obviously not true.” None of them bothered to read the actual document, and one of them actually provided a refutation to what seemed like an entirely different issue. I still don’t understand it.
It is the most irrationally emotional response I have gotten since I started blogging. And it brings me to the following conclusion: Jobs has the capacity to make people wacky. I don’t know if he is practicing some advanced form of Neuro-Linguistic Programming (NLP), or if it is some mentalist parlor trick, or perhaps even mass hypnosis.
But one thing is for sure. Steve Jobs is exceedingly powerful even in small doses. And while, generally, consumption should be considered safe, after watching a Jobs presentation, please wait two hours before driving.
Friday, March 7, 2008
About a year ago I became excited about the idea of developing for the iPhone. That excitement waned as Apple made it clear that they were not supporting 3rd party application development. Since then, Google Android has made its way onto the scene and offered a truly open, truly multi-tasking OS that will probably achieve fairly wide adoption.
But in the last few months, the idea of developing for the iPhone became exciting again. Apple announced they were going to offer an iPhone software development kit (SDK) that would allow third party developers to write iPhone apps.
Today, I am mightily disappointed. This morning I read in TechCrunch that iPhone apps will only be able to be run one at a time. No background functionality. No flipping between applications.
Now this may seem like a nit. But it is huge. Communications applications like instant messaging, VOIP, and really anything else where you want to broadcast information about yourself (like presence) or monitor other events on the network and react to them, are impossible. Perhaps Apple will allow a few insiders to access multi-tasking functionality, but essentially, though the operating system obviously supports it, none of that will be accessible to developers.
The bottom line is I think we are going to see a lot of cool games for the iPhone, and perhaps other types of applications. But as a platform for building communications applications, the iPhone sucks. Damn.
While I am not really active in that world any more, I still follow it. This morning I read a post by Stefan Richter, an FMS community stalwart, that must have left a fair number of people a bit nervous. The post essentially describes a patent which Stefan interprets as Adobe having patented RTMP (Real Time Messaging Protocol), the protocol which governs the communication between the Flash client and the Flash Media Server.
This post is an effort to lower the heart rate of the Flash open source world, so let's cut to the chase: by my read, Adobe did not patent the RTMP protocol. The open source world is safe.
Adobe has patented a *system* which comprises a media server *and* a media player. In other words a violating system would have to contain both elements. The reason that the FMS clones are safe is because they do not contain a client. They did not create the client, and they are only solving half the problem.
To help understand why things work this way, a little background in patents might be helpful. The way patents work is through what are called claims. A claim is a statement of a collection of things an invention must do to be considered infringing. Patents generally have numerous claims. In order to examine a patent and test for infringement, the first step is to read each claim of the patent and see whether, for that claim, all of the elements of the claim are in the potentially infringing invention.
So in this case, as far as I can see, all of the claims of the Adobe patent describe a system which contains code running on the client *and* the server. And so, since none of the FMS competitors' products contain a media player, no infringement.
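The element-by-element test described above maps directly onto code: a claim is infringed only if every one of its elements is present in the accused system, and a patent reads on the system only if at least one claim does. This is a hedged sketch; the claim "elements" below are my own illustrative stand-ins, not Adobe's actual claim language.

```python
def claim_infringed(claim_elements, accused_system):
    """A claim is infringed only if ALL of its elements appear in the system."""
    return all(elem in accused_system for elem in claim_elements)

def patent_reads_on(claims, accused_system):
    """A patent is infringed if ANY of its claims is fully present."""
    return any(claim_infringed(c, accused_system) for c in claims)

# Illustrative stand-in for the Adobe patent: every claim requires both
# a media server AND a media player (client) talking an RTMP-like protocol.
claims = [{"media server", "media player", "rtmp-like protocol"}]

fms_clone = {"media server", "rtmp-like protocol"}               # no client
full_stack = {"media server", "media player", "rtmp-like protocol"}

print(patent_reads_on(claims, fms_clone))   # False: server alone is safe
print(patent_reads_on(claims, full_stack))  # True: server plus player together
```

The all/any structure is the whole point: missing even one element of every claim, as the server-only FMS clones do, means no infringement.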
On the other hand, what this does prevent is Microsoft from making a Windows Media Server for Silverlight that uses an RTMP-like protocol. I am not sure how necessary it would be to replicate the RTMP for Microsoft to deliver equivalent functionality, but if it is necessary, this patent is a pretty good firewall against that.
In any case I am sure many of you will be reviewing the patent and my analysis. I will be curious to see if any of you think I have missed any server side only claims, or for that matter anything else which might add a different perspective.
Thursday, March 6, 2008
And so it is with satellite radio, specifically XM and Sirius. As I see it, they can’t make it. Now to put this in context, I am a Sirius Satellite subscriber. Why? Just one reason – I am a Howard Stern fan. Without Howard I would never subscribe to satellite radio, particularly since I live in New York City, and don’t have a car. But the point is I do have and use a satellite radio service. And I have to say, it’s not bad… if it was 2000.
As a 2008 listening experience, satellite radio is sub par. Yes, Howard and other content like sports broadcasts have value, but for most people that content is the potatoes and not the meat. So as I analyze satellite I do so purely as a music listening experience.
Now you probably think what I am going to suggest is that iPods are the problem. Yes, I am sure iPods have had some impact, but for the most part, Satellite radio really addresses a different set of needs and is not directly competitive. To fully understand the issue we really need to break down the market into its constituent parts. There are three types of music listening experiences.
- Active. This is where you build your own playlists and fully control your listening experience, e.g. the iPod, CDs, etc.
- Passive. You just sit back and listen. This is where traditional broadcast radio and satellite radio are. Channel changing is your only mechanism for control.
- Interactive. Services like Last.FM, Pandora, Slacker, and many others help you with music discovery by providing a radio-like experience with added abilities like social listening, song skipping, and thumbs up/down that shape your personal listening experience. Because you can still listen passively if you choose, you can avoid what you don’t like while still discovering new music.
Most new music discovery happens today through a blend of interactive and passive listening.
The problem for satellite radio is that from a quality of experience perspective, interactive services crush satellite radio, while the cost of operating the satellite radio services – billions a year – is one to two orders of magnitude greater than operating an Internet based interactive service. Higher cost of operations, lower quality of experience – not exactly a formula for long-term success.
The one stronghold that satellite has is that it is indeed far easier today to listen to satellite radio in your car. But most of the Internet-based interactive services either have or are developing mobile solutions. These solutions work over cell networks or, in the case of Slacker, through wifi sync and by licensing a small slice of traditional satellite spectrum. These technology solutions are far cheaper to offer than launching and maintaining your own satellites, and yet the interactivity gives you a far better experience than Sirius/XM.
In 2000, satellite seemed like a really cool thing. But the technology landscape and infrastructure costs have shifted unfavorably, and will continue to. And at the end of the day it all boils down to economics. You just can’t lose hundreds of millions of dollars a month, provide an inferior product, and expect to survive.
Wednesday, March 5, 2008
SAI's Henry Blodget says he agrees, and that not disclosing his cancer was irresponsible.
First of all, while Jobs is certainly brilliant and the company's most important asset, he is indeed mercurial, and probably does have the board totally in his pocket. But suggesting that the board should have broken a personal confidence related to health goes, in my mind, way beyond their right or fiduciary responsibility. In short, I could not disagree with Henry and Fortune more strongly here.
Think about the slippery slope here. Who decides when an illness should be disclosed to the public? If he had AIDS, should that be disclosed? What about early multiple sclerosis? Does it have to be life threatening? Who determines whether it is life threatening? Is it then the board's responsibility to do "due diligence" on the CEO's illness? If so, what about the rest of senior management? Should Jonathan Ive's health records be made public too? He arguably has almost as significant a role at Apple as Jobs.
Part of the problem with this implied disclosure obligation is that from a management perspective, if the board is obligated to tell, then no senior manager who wants help in deciding what to do will be able to discuss it with their board. This substantially harms the company's ability to govern itself, or to discuss things like succession.
Beyond the ridiculousness of a board being expected to independently disclose personal health matters of senior management, the truth is that boards are not required to disclose all matters, even purely business matters, to shareholders. In fact, keeping product plans, strategies, and challenges secret is also part of the fiduciary responsibility of boards.
Legally, while board members do have a specific fiduciary duty known as the "duty of disclosure", as far as I know there is no legal basis to suggest that health information falls under the category of information which one would ever need to disclose under this obligation. Even then, the requirements under duty of disclosure are primarily tied to disclosure in the context of shareholder action.
The bottom line is it is neither ethically nor professionally nor legally reasonable to suggest that boards are obliged to report on health matters of senior executives. On every level it would do more harm than good.
Tuesday, March 4, 2008
I must admit to not being a big Facebook user, and I am not that familiar with the details of the API. But what I do know is that something is wrong, because it appears to be a platform that is genetically incapable of producing anything useful. The platform's uselessness has risen to the level of a broad societal joke where the punch line is "super poke."
I am really not trying to be sarcastic here, but I know I have some super smart subscribers, and so I am just wondering if anybody can help explain this?
Monday, March 3, 2008
The problem is it can't make money.
The economics of user-generated video will not work for the foreseeable future. There are numerous reasons for this. I have outlined them below.
- Hosting is expensive because, unlike TV, every incremental viewer costs money, so you have to generate a lot in advertising to support it. Right now the actual user-generated video revenue number is just a whisker above zero.
- Producing a video ad is expensive, which leaves out search advertisers, who would prefer to buy keywords and spend the *zero* dollars it costs to produce a keyword ad.
- Mainstream advertisers that actually can afford to create good video ads won't advertise on user-generated video because they don't want their ads stuck in front of some sad-looking kid swinging a broomstick around and pretending to be Darth Vader – or worse.
- It’s not clear that pre or post roll advertising works anyway – either for the viewer or the advertiser.
- The real moneymaker for Internet advertising is keyword-based search advertising. At no time in the near future will keyword advertising work for web video, because there is no accessible text.
Unlike a traditional business, getting profitable and then selling does not appear to be an option. Your only exit will be, after having lost a boatload of investor money, to sell your sinking ship to a big web player who is not Google, since Google already has its own video failure. The key acquisition criterion will be that the acquirer is in desperate need of help flushing its money down the toilet.