
Oh, you want support?

I don't know how many open source communities have the same problem, but on the LLVM list we receive more than a few emails a year from people really upset that no one has fixed their bugs quickly enough, or that no one replied to their emails. I find this behaviour quite interesting from a sociological point of view, but if you behave that way, let me help you straight out: it's rude. Really.

Business Model

The open source business model relies on sharing ideas, accumulating technology and developing niches. Small, incremental pieces are incorporated into stabilizing products that provide value to a group of people.

For example, MacOS and Linux provide different value to the same user base (desktop users). The more commercial software, like MacOS, provides a stable, recognizable interface, with powerful integration with other products of the same line, while the open counterparts provide a more experimental interface, but greater control and spread of knowledge.

Apple's business model is quite different from that of most Linux distributions, but both heavily use and derive from open source infrastructure (kernel, compilers, libraries). So, if you purchase MacOS, you're getting not only the eye candy, but also some components that are open source, like LLVM. What companies get from investing in LLVM is a topic for a different kind of post, but rest assured, the license is really clear: "THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED".

Presumptuous Crowd

Most Linux/BSD users, when they have a problem with their programs, first search the web for the error message. In the uncommon case where they don't find an answer, they then post on forums or mailing lists, often politely, attaching their logs and error messages, and gladly waiting for an answer that may take a day, a week, or may sometimes be forgotten altogether. They then try a different forum, or "ping" their messages, work a bit harder, find more clues, and so on.

After all, no one is as interested in your problems as you are. Let me make that one clear:

No one is as interested in your problems as you are.

Most people that deal with open source understand that. Most people that buy software don't. But there is an intermediate crowd that has recently grown tremendously: the freemium folks.

Most people now enjoy an impressive number of free products, in the midst of all the software they did purchase, and for most of them they receive the same quality of support as for their paid products. That seems contradictory, even paradoxical, but the answer is quite simple: those products are not actually free.

If you haven't figured it out yet, let's get that one clear, too: you pay for them with your personal information. Accurate location logs, purchase history, personal identification, credit status, number of friends (and all their personal information too), who you like and who you don't, etc. All that information is dutifully stored and used for their profit. A profit that is orders of magnitude higher than it would be if they did none of that and you paid $10 for the product. Even $100. Hell, even if you paid $1000 per year it would have been cheaper, or better said, they would make less money from you.

So it only makes sense that they treat you like a fully paid member of their exclusive club, and treat you like a king, so that you don't jump ship and go share your cat pictures on some other social website. Some people quickly understand what's at stake, but most keep using the service as a matter of convenience. They know the price of their privacy, and they exchange it for convenience.

Market Penetration

As predicted by many in the 90s, and repeated by most in the last decade, open source (free/libre/etc.) has taken root in computing and is now the base for all technology. From stock markets to the ISS. From high-performance computing centres to schools. From operating systems to games. Open source is everywhere, and more people who never thought they would have any contact with open source are now being exposed to it first hand. The spread of open source technologies is so complete that I'd venture to say there isn't any profitable company today that doesn't use or ship open source with its products. There isn't a gadget you own that didn't rely on it during design or production, or doesn't rely on it for its operation.

And, as with any other technology, open source occasionally fails. And when it fails, helpful messages pop up where users were expecting a nice "support contract" to fix it straight away. You may contact whoever you paid, and they may help you, or they may give the standard response that it's not their problem. After all, your privacy is worth a lot of money, but not that much.

Support Contract

Because open source is everywhere, more and more people who are not used to how it works are falling prey to the support contract fallacy.

You may get expedited help from the makers of "free" Android apps, or from social media websites, and they may provide their services for free and still be very friendly and helpful, but you cannot compare that kind of free with libre/open source freedom. In free software / open source, we do not store your personal data, nor do we want to. We do not track your whereabouts, nor do we contact your friends on your behalf. We don't take those liberties, mostly because that's not our business model, but also because most of us believe it's wrong.

Because you're not paying us, directly or indirectly, you cannot ever expect that anyone will help you, much less within any reasonable time. The overwhelming majority of people working on open source projects are directly or indirectly paid by companies, and that's their day job: fixing the problems their companies think will best improve their products. Only a small minority of lucky bastards can work on free software without any compensation or direction from a company, but even those people have their own agenda. And that's very rarely aligned with yours.

Expecting support, or complaining about the lack of help or interest in your problems, is like carrying a large bag through the underground and being mad at people for not helping you. Granted, many people will help you, but as a selfless act, not as a support contract. Only those going in the same direction, those with a free hand, or those with some shared history (say, they have been in the same situation before) are likely to help you, and different people align differently with your problem, be it a large suitcase, a baby pram, or some clumsy, fragile painting. Different people will help at different times.

In libre/open source, the situation is exactly the same. We're all working on our own projects and priorities, and unless your problem is directly related to my paid job, I will rarely even look at it. It's not out of spite, but if I stop doing the work I'm paid to do and start helping everyone in need, I'll lose my job and won't be able to help anyone any more. Not to mention feed my family.

The social contract

When you send an email that no one pays attention to, try phrasing it differently. Or better yet, do some more investigation, provide more information, show that you care about what you're asking. There's nothing worse in a forum than people asking others to solve their homework. The general rule of free help is that you must show at least as much interest and sweat as the people helping you. It's exactly the opposite of a support contract. Moreover, your behaviour will tell people whether to help you or not. The more aggressive and demanding you become, the less people will help you. The more humble and hard-working you are, the more they will.

To understand that social contract, think of it as an exchange. If you bring a lot of information with your request, I will learn a thing or two from it. I enjoy learning, so even if it's not my area, I may feel compelled to help you just because you might teach me something. If there is any payment in community help, this is it: the knowledge you pass on to the people helping you, and the joy they feel in learning something new and helping a nice chap.

In the end, most people who are new to such environments end up learning this really fast and become enthusiastic contributors. This is, for me, the beauty of the lack of payments. Each of us values the newly acquired knowledge in a different way, so it'd be impossible to treat it as standard currency. But since I don't tell you how much I value your contribution, and vice versa, we cannot know who made the profit. More importantly, in this case profit is not the difference between my gains and your gains, but the difference between my expectation of gains and my actual gains, which is completely independent of your exchange rate.
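Roughly, in symbols (my own loose formalisation, nothing standard): for each participant $i$,

$$\text{profit}_i = \text{actual gain}_i - \text{expected gain}_i,$$

so each side's profit depends only on its own expectations, and both sides can come out ahead at the same time.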

This is precisely what Buckminster Fuller meant by synergetics. The total system behaviour is not always predictable from the behaviour of its parts, and in some systems the aggregated value can be more than the sum of the individual gains. This is why the open source business model is so infectious and addictive. Once you're in, there's no way out. But you have to put in some effort.

And he’s dead…

No, not the one everyone is talking about. The one that actually made it all work.

Not the one that was worried about uniforms and style, but the one that actually designed and developed the foundations of modern society.

Not the one that locked people into a dungeon of usability, but the one that created the tools to enable everyone's freedom.

The one whose work made possible the computer revolution in the 70s, the micro-computer revolution in the 80s, the open source revolution in the 90s and the mobile revolution of this last decade. Without Unix and C, with their simple but elegant design, the stronghold of modern society, none of this would be possible. We'd still be fighting over who invented the bloody pipe.

Rest in peace, Dennis MacAlistair Ritchie, and may your wisdom, embedded in the world today, linger as long as possible in our minds.

UPDATE: (Wired) Dennis Ritchie: The Shoulders Steve Jobs Stood On

Science vs. Business

Since the end of the dark ages, and the emergence of modern capitalism, science has been connected to business, in one way or another.

During my academic life, and later when I moved into business, I saw the battle between those who would only do pure science (with government funding) and those who would mainly do business science (with private money). There were only a few in between the two groups, and most of them argued that it was possible to use private money to promote and develop science.

For years I believed it was possible, and back then the title of this post wouldn't have made sense to me. But as I dove into the business side, each step taking me closer to business research than before, I realised that there is no such thing as business science. Profit is such a fundamental aspect of capitalism that it makes it so.

Copy cats

Good mathematicians copy; the best mathematicians steal. The three biggest revolutions in computing during the last three decades were the PC, open source and Apple.

The PC revolution was started by IBM (with open platforms and standard components), but it was really driven by Bill Gates and Microsoft, and that's what generated most of his fortune. However, it was a great business idea, not a great scientific one, as Bill Gates copied from a company the size of a government: IBM. His business model's return on investment was instantaneous and gigantic.

Apple, on the other hand, never made much money (not as much as IBM or Microsoft) until recently, with the iPhone and iPad. That is, I believe, because Steve Jobs copied from a visionary, Douglas Engelbart, rather than from a business model. His return on investment took decades, and he took one step at a time.

However, even copying from a true scientist, he had to have a business model. It was impossible for him to open the platform (as MS did), because that was where all the value was: Apple's graphical interface (with the first Macs), the mouse and so on (all blatantly copied from Engelbart). They couldn't control the quality of the software for their platform (they still can't today on the App Store), so they opted to do everything themselves. That was the business model getting in the way of a true revolution.

To this day, Apple tries to build the coolest system on the planet, only to fall short because of the business model. The draconian methods Microsoft used on competitors, Apple uses on its customers. Honestly, I don't know which is worse.

On the other hand, open source was born as the real business-free deal. But its success has nothing to do with science, nor with that business-freeness. Most companies that profit from open source do so by exploiting the benefits and putting little back. There isn't really any other way to turn open source into profit, since profit is basically gaining more than you spend.

This is not all bad. Most successful open source systems (such as Apache, MySQL, Hadoop, GCC, LLVM, etc.) are successful because big companies (like Intel, Apple, Yahoo) put a lot of effort into them. Managing the private changes is a big pain, especially if more than one company is a major contributor, but it's more profitable than putting everything into the open. Getting the balance right is what boosts, or breaks, those companies.

Physics

The same rules also apply to other sciences, like physics. The United States is governed by big companies (oil, weapons, pharma, media) and not by its own government (which is only a puppet for the big companies). There, science is mostly applied to those fields.

Nuclear physics was only developed at such a fast pace because of the bomb. Lasers, nuclear fusion and carbon nanotubes are mostly funded by the military, or via the government for military purposes. Computer science (both hardware and software) is mainly done at the big companies, with a business background, so again not real science.

Only the EU, a less business-oriented government (but still, not that much less), could spend a gigantic amount of money on the LHC at CERN to search for a mere boson. I still don't understand what the commercial applicability of finding the Higgs boson is, or why the EU agreed to spend such money on it. I'm not yet ready to accept that it was all in the name of science…

But while physics has clear military and power-related objectives, computing, or rather social computing, has little to none. Radar technologies, heavy-load simulations and prediction networks receive strong budgets from governments (especially the US and Russia), while topics such as how to make the world a better place with technology have little or no space in either business- or government-sponsored research.

That is why, in my humble opinion, technology has yet to flourish. Computers today create more problems than they solve. Operating systems make our lives harder than they should, office tools are not intuitive enough for everyone to use, compilers always fall short of doing a great job, and the human interface is still dominated by the mouse, invented by Engelbart himself in the 60s.

Not to mention the rampant race to keep up Moore's law (in both cycles and profit) at the cost of everything else, most notably the environment. Chip companies want to sell more and more, obsoleting last year's chip and sending it to the landfill, as there is no efficient recycling technology yet for chips and circuits.

Unsolved questions of the last century

As with Fermat's theorems, computer scientists had loads of ideas last century, at the dawn of the computing era, that are still unsolved. Problems that everybody tries to solve the wrong way, as if solving them were going to make that person famous, or rich. The most important problems, as I see them, are:

  • Computer-human interaction: how to develop an efficient interface between humans and computers, so as to remove all barriers to communication and ease the development of effective systems.
  • Artificial Intelligence: as in real intelligence, not mimicking animal behaviour, not solving subsets of problems. Solutions based on emergent behaviour, probabilistic networks and automata.
  • Parallel Computation: natural brains are parallel in nature, yet computers are serial. Even today's parallel computers (multi-core) are only parallel up to a point, after which they go back to being serial. The serial barriers must be broken; we need to scrap the theory so far and think again. We need to ask ourselves: "what happens when I'm travelling at the speed of light and I look into the mirror?"
  • Environmentally friendly computing: most components on chips and boards are not recyclable, and yet they're replaced every year. Does the hardware really need to be more advanced, or is the software getting dumber and dumber, driving hardware complexity up? Can we use the same hardware with smarter software? Is the hardware smart enough to last a decade? Was it really meant to last that long?

All those questions are, in a nutshell, of a scientific nature. If you take the business approach, you'll end up with a simple answer to all of them: it's not worth the trouble. It is impossible, in the short and medium term, to profit from any of those routes. Some of them won't generate profit even in the long term.

That's why there is no advance in those areas. Scientists who study such topics are alone, and most of the time they are trying to make money out of it (thus going the wrong way and not hitting the bull's eye). One of the gurus in AI at the University of Cambridge is a physicist, and his company doesn't do anything new in AI; it exploits a little effort on old-school data mining to generate profit.

They do generate profit, of course, but does it help to develop the field of computer science? Does it help tailor technology to better ourselves? To make the world a better place? I think not.

Task Driven Computing

Ever since Moore's idea became a law (by providence), and empires were built upon this law, little thought has been given to the need for such advancements. Raw power is considered by many to be the only real benchmark by which one machine can be compared to others. Cars, computers and toasters are all alike in that regard, and are only as good as their raw throughput (real or not).

With the carbon footprint disaster, some people began to realise (not for the correct reasons) that maybe we don't actually need all that power to be happy. Electric cars, low-powered computers and smart appliances are now appealing to the end consumer and, for good or bad, things are changing. The rocketing growth of the mobile market (smartphones, netbooks and tablets) in recent years is a good indicator that the easily seduced consumer mass is now being driven towards leaner, more efficient machines.

But how lean are we ready to go? How much raw power are we willing to give away? In other words, how far does the appeal the media push on us go towards making us relinquish those rights bestowed by Moore? Not very far, it seems, with all the chip companies fighting for a piece of the fat market (as well as the lean one, but less so).

What is the question, anyway?

Ever since that became a trend, the question has always been: "how lean can we make our machine without impacting usability?". The focus so far has been on creating smarter hardware and, to a lesser extent (and only recently), on reducing the unneeded fat of operating systems and applications, but no one ever touches the fundamental question: "Do we really need all that?"

The question is clearly cyclic. For example, you wouldn't need a car if public transport were decent. You wouldn't need health insurance if the public health system were perfect, and so on. With computing it's the same. If you rely on a text editor or a spreadsheet, it has to be fast and powerful, so you can finish your work on time (and not get fired). If you are a developer and have to re-compile your code every so often, you need a damn good computer (in CPU and memory) to make it as painless as possible. Having a slow computer can harm the creative process that surrounds all those tasks, and degrade the quality of your work by an unknown amount.

Or does it?

If you didn't have to finish your work quicker, would you still work the same way? What if you didn't have to save your work, or install additional software, just because the system you're working on only runs on a particular type of computer (say, one only available at your workplace)? If you could perform tasks as tasks, and not as a whole sequence of meaningless steps and bureaucracy, would you still take that amount of time to finish them?

Real world

Even though the real world is not that simple, one cannot take the whole of reality into account in every investigation. Science just doesn't work that way. To be effective, you take out all but one variable and test it. One by one, until you have a simplified picture, a model of reality. If at every step you include the whole world, the real world, in your simulations, you won't get far.

There is one paper that touched on some of these topics back in 2000, and little has changed since then. I dare say it has actually got worse. All these app stores competing for publicity and forcing incompatibility with invisible boundaries have only made matters worse. It seems clear enough to me that the computing world, as far back as I can remember (the early 80s), was always like that, and it's not showing signs of change so far.

The excuse for keeping on doing the wrong thing (i.e. not thinking clearly about what a decent system is) was always that "the real world is not that simple", but in fact the only limiting factor has been the greed of investors, who cannot begin to understand that a decent system can bring more value (not necessarily money) than any quickly designed and delivered piece of software available today.

Back in the lab…

Because I don't give a fig about what they think, I can go back to the lab and think clearly. Remove greed, profit and market from the table. Leave users, systems and what's really necessary.

Computers were (long before Turing) meant to solve specific problems. Today, general purpose computers create more problems than they solve, so let's go back to what the problem is and try to solve it without any external context: tasks.

A general purpose computer can perform a task in pretty much the same way as any other; after all, that's why they're called "general purpose". So the system that runs on it is irrelevant: if it does not perform the task, it's no good. A good example of this is web browsers: virtually every browser can render a page and show surprisingly similar results. A bad example is text editors: most of them won't even open one another's documents, and when they do, the original editor will have done all in its power to make the result look horrid in the other.

Supposing tasks can be done seamlessly on any computer (let's assume web pages for the moment), does the computer only compute that task, or is it doing other things as well?

All computers I know of will keep running, even if broken, until they're turned off. Some can increase and decrease their power consumption, but they'll still be executing instructions until the world's end. According to our least-work principle (execute tasks and nothing else), none of that is relevant, so we must take it out of our system.

Thus, such a computer only executes when a task is requested; it must complete that task (and nothing more), and stop (really stop: zero watts) right after that.

But this is madness!

A particular task can take longer to execute, yes. It'll be more difficult to execute simultaneous tasks, yes. You'll spend more cycles per task than usual, yes! So, if you're still thinking like Moore, then this is utter madness and you can stop reading right now.

Task Driven Computing

For those who are still with me, let me try to convince you. Around 80% of my smartphone's battery is consumed by the screen. The rest is generally spent on background tasks (system daemons), and only about 5% on real tasks. So, if you could remove that other 95% of your system's consumption, your tasks could consume 20x more power (20 × 5% = 100%) and you'd still break even.

Note that I didn't say "20x the time", for that's not necessarily true. The easiest way to run multiple tasks at the same time is to have multiple CPUs in a given system. Today that doesn't scale too well, because the operating system has to control them all, and they all just keep running (even when idle), wasting a huge amount of power for nothing.

But if your system is not designed to control anything, only to execute tasks, then even though you'll spend more time per task, you'll have more CPUs working on tasks and fewer on background maintenance. Also, once the task is done, the CPU can literally shut down (I mean, zero watts) and wait for the next task. There is no idle cost; there is no operational code being run to multi-task, to protect memory or to avoid race conditions.
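To make the idea concrete, here's a minimal sketch in C of such a core loop, simulated in software. The "zero watts" wait and the task queue are stand-ins I made up; no real chip exposes hooks like these today.

    /* A minimal, simulated sketch of a task-driven core: no scheduler tick,
     * no idle loop, no background daemons. Wake, run one task to completion,
     * then "power off" until the next request. The zero-watt wait is only
     * pretended here, since no real hardware exposes such a hook. */
    #include <stdio.h>
    #include <stddef.h>

    typedef struct {
        const char *name;
        void (*run)(void);
    } task_t;

    static void render_page(void)  { puts("  rendering page"); }
    static void compile_code(void) { puts("  compiling code"); }

    /* Stand-in for "sleep at zero watts until a task is requested". */
    static const task_t *wait_for_task(const task_t *queue, size_t len, size_t *next)
    {
        return (*next < len) ? &queue[(*next)++] : NULL;  /* NULL: core stays off */
    }

    int main(void)
    {
        const task_t queue[] = { { "render", render_page },
                                 { "compile", compile_code } };
        size_t next = 0;
        const task_t *t;

        while ((t = wait_for_task(queue, 2, &next)) != NULL) {
            printf("core on for task: %s\n", t->name);
            t->run();                        /* all the energy goes into the task */
            puts("core off (0 W) until next request");
        }
        return 0;
    }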

Problems

Of course, that's not as easy as it sounds. Turning CPUs on and off is not trivial, running tasks with no OS underneath (and expecting them to communicate) is not an easy job, and fitting multiple processors into a small chip is very expensive. But, as I said earlier, I'm not concerned with investors, markets or money; I'm concerned with technology and its real purpose.

Scaling is also a real problem. Connection Machines were built and thrown away, clusters have peak performance way above their average performance levels, and multi-core systems are hard to work with. Part of that is real (the interconnection and communication parts), but the rest was artificially created by operating systems solving new problems in an old way, just because it was cheaper, or quicker, or easier.

Back in the days…

I envy the time of the savants, when they had all the time and money in the world to solve the problems of nature. Today, the world is corrupted by money, and even the most prominent minds in science are corrupted by it, trying to be the first to do such and such, protecting their research from peers just to claim a silly Nobel prize or to be world famous.

The laws of physics have led us into it: we live in a local minimum, the least energetic configuration within reach, and that's here, now. To get out of any local minimum we need a good kick, something that will push us into a more energetic configuration; with enough luck, we'll then fall into another local minimum that is less energetic than this one. Or, if we're really the masters of the universe, maybe we can even live harmoniously at a local maximum, who knows!?

Touch-screen keyboard

I've been using virtual keyboards for a while (iPad, Android phone) and, while they're good enough, they made me wonder…

QWERTY

The QWERTY keyboard was invented in the 19th century and, before computers had keyboards, it was mainly used for typing letters. The advanced feature of a typewriter was the SHIFT key.

When computer input changed from switches and punched cards to keyboards and printers (before monitors were invented), the natural choice was to use the ubiquitous QWERTY layout. But because of the nature of computing, many additional keys were needed. Since the single most important function of a keyboard was to input code (and not Word documents), the SHIFT concept was extended with CONTROL keys, the FUNCTION keys were added, along with other concepts impossible on a typewriter, such as Home/End, Insert/Replace, etc.

All that was added around the traditional keyboard, and today it is ubiquitous as well. All editors (code and otherwise) use those keys extensively, and it'd be impossible to imagine a keyboard without them. But, to be honest, the layout of the computer keyboard did not technically have to mimic the old-fashioned typewriter. It just did so to ease the transition from writing letters on paper to writing code on silicon.

Nowadays, the virtual keyboard is, again, mimicking the 120-year-old layout, just because everyone got used to it, but the excuses for keeping it are fading. I don't know anyone who still uses a typewriter, do you? Also, most of the extra keys are still only used by power users of some sort, in Vim, Emacs, Photoshop and Excel.

Swipe movements

Clicking on links, editing text and drag-and-dropping are very awkward on the iPad (not to mention on phones), so that's not going to stick for more than a decade. However, gestures are so intuitive on touch-screen interfaces that they can easily become mainstream, if done right.

One browser I use on Android (Dolphin) has hand gestures, and I have to say they're horrible and too complicated. They're based on the old mouse gestures which, by definition, are outdated and not a good technological fit.

Touch gestures have to be more natural, like moving objects on your desk. One way to do this is the (now famous) two-finger swipe. Another is to add hot areas to your touch screen that people know are for specific purposes. For example, imagine a touch-screen keyboard the size of an iPad (10″, no screen on it, just the keyboard). Remove every key other than the QWERTY block itself. Swiping or tapping the lower area brings up a full-screen pointing device, with all the gestures and controls one needs; another tap brings the main keyboard back.

In the same way, the right and left areas would bring up editing capabilities for your programs, and each program could have its own panel. Some could revert to the main keyboard as soon as you press a key (to save the return tap); others would let you press multiple keys at the same time. The top area could bring up the multimedia panel, with animated buttons, etc.
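As a rough illustration of those hot areas, here is a small C sketch of the hit test such a keyboard could use. The zones, sizes and coordinates are invented for the example, not taken from any real device.

    /* Map a touch point on a hypothetical 10" keyboard surface to a zone.
     * All dimensions are made up for illustration. */
    #include <stdio.h>

    typedef enum {
        ZONE_KEYS,     /* plain QWERTY block in the middle                */
        ZONE_POINTER,  /* bottom strip: brings up the full-screen pointer */
        ZONE_EDIT,     /* left/right strips: per-program editing panels   */
        ZONE_MEDIA     /* top strip: multimedia controls                  */
    } zone_t;

    enum { WIDTH = 1024, HEIGHT = 600, STRIP = 80 };  /* hypothetical resolution */

    static zone_t zone_at(int x, int y)
    {
        if (y < STRIP)                        return ZONE_MEDIA;
        if (y > HEIGHT - STRIP)               return ZONE_POINTER;
        if (x < STRIP || x > WIDTH - STRIP)   return ZONE_EDIT;
        return ZONE_KEYS;
    }

    int main(void)
    {
        /* A tap near the bottom switches to the pointer layer; a tap in the
         * middle is just a key press. */
        printf("tap at (512, 580) -> zone %d (pointer)\n", zone_at(512, 580));
        printf("tap at (512, 300) -> zone %d (keys)\n",    zone_at(512, 300));
        return 0;
    }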

Feedback

All of that is not complete without some feedback, the worst part of using virtual keyboards. The click noise is easy enough, but tactile feedback has been coming a long way without really going anywhere. Microsoft, Apple, Nokia and Sony have all tried (and patented) solutions for tactile touch screens, and yet no mainstream device today uses one. I'm sure that's mostly due to technical difficulties (maybe battery life, or bulkiness), none of them critical for a keyboard.

When playing games on the phone that emulate joysticks (a SNES emulator, or other Android-specific ones), I often die because of the lack of tactile feedback; that is, I take my finger off the D-pad without noticing. This is most annoying, and virtual keyboards aren't going anywhere without a decent feedback system.

Some laptop/tablet cross-breeds have dual touch screens for that purpose, and I think (or rather, hope) that this is the future. But they need to change the assumptions about how keyboards are supposed to work. Luckily, that can all be done in software, and Linux is an open system on which anyone could implement it.

If you do, please open source it and make it free. Any penny you make from it is a second further away from it being universally accepted.

Dream Machine (take 2)

More than three years ago I wrote about the desktop I really wanted… Now it’s time to review that and make some new speculations…

Back Then

The key issues I raised back then were wireless technology, box size, noise, temperature and the interface.

Wireless power hasn't progressed as much as I'd like, but all the rest (including wireless graphics cards) is already at full steam. So, apart from power, you don't need any cables. Also, batteries are getting a bit better (not as fast as I'd like, either), so there is a stop-gap for wireless power.

Box sizes have shrunk dramatically since 2007. Tablets are almost full computers, and with Intel and ARM battling for the mid-size form factor, we'll see radical improvements: lower power consumption, smaller sizes, much cooler CPUs and, consequently, no noisy fans. Another thing bound to reduce temperature and noise is the speed at which solid-state drives are catching up with magnetic ones.

But with regard to the interface, I have to admit I was a bit too retro. Who needs 3D glasses, or pointer hats to drive the cursor on the screen? Why does anyone need a cursor in the first place? Well, that brings me to my second dream machine.

Form Factor

I love keyboards. Typing for (int i=0; i<10; i++) { a[i] = i*M_PI; } is way easier than trying to dictate it and hoping it gets the brackets, increments and semicolons right. Even if the dictation software were super-smart, I would still feel silly dictating that. Unless I can just think and have the computer create the code the way I want it, there is no better interface than the keyboard.

Having a full-size keyboard also lets you spare some space for the rest of the machine. Transparent CPUs, GPUs and storage are still not available (nor do I think they will be in the next three years), so putting everything into the monitor is a no-go. Flat keyboards (like the Mac ones) are a bit odd and bad for ergonomics, so a simple ergonomic keyboard with the basic hardware inside would do. No mouse, of course, nor any other device except the keyboard.

A flat, transparent screen, made of some organic LED or electronic paper, with the camera built into the centre of the screen, just behind it, so on VoIP conversations you look straight into the eyes of your interlocutor. The speakers are also transparent and part of the screen: the right and left halves are screen plus speakers, with transparent wiring as well. All of that wireless, of course. It should be extra light, with just a single arm holding the monitor, not attached to the keyboard. You should be able to control the transparency of the screen, to switch between VoIP and video modes.

Hardware

CPUs and GPUs are so 10s. The best way forward is to have multi-purpose chips that can turn themselves (or their parts) on and off at will, and that can execute serial or vector code (or both) when required. So, a 16/32-core machine, with heavily pipelined CPU/GPUs on multiple buses (not necessarily all active at the same time, or for the same communication purpose), could deal with on-demand gaming, video streaming, real-time ray tracing and multi-threaded compilation without wasting too much power.

In a direct comparison, any of those CPU/GPU dies would have a fraction of the performance of a traditional monolithic chip, but given their inherent parallelism, and if the OS and drivers are written with that assumption in mind, a lot of power can be extracted from them. Also, with so many chips, you can selectively use only as many as you need for each task in specific applications. So a game would use more GPUs than CPUs, probably with one or two CPUs handling the interface and sound. When programming, one or two CPUs can handle the IDE, while the others compile your code in the background. As all of this is on demand, even during a game you could have a variable number of chips working as GPUs, depending on the depth of the world being rendered.
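Just to illustrate what "use only as many as you need" could mean, here is a toy C sketch of a policy that splits a 32-core chip between CPU-style and GPU-style work per kind of task. The core counts and task types are arbitrary assumptions of mine, not any real driver interface.

    /* Toy policy: decide how many of 32 identical cores act as "GPU", how many
     * as "CPU", and how many stay powered off, per kind of task. All numbers
     * are arbitrary, for illustration only. */
    #include <stdio.h>

    #define TOTAL_CORES 32

    typedef enum { TASK_GAME, TASK_COMPILE, TASK_IDE, TASK_VIDEO } task_kind;

    typedef struct {
        int gpu;  /* cores configured for vector/graphics work */
        int cpu;  /* cores configured for serial work          */
        int off;  /* cores left at zero watts                  */
    } allocation;

    static allocation allocate(task_kind kind)
    {
        allocation a = { 0, 0, 0 };
        switch (kind) {
        case TASK_GAME:    a.gpu = 24; a.cpu = 2;  break;  /* world + UI/sound */
        case TASK_COMPILE: a.gpu = 0;  a.cpu = 16; break;  /* parallel build   */
        case TASK_IDE:     a.gpu = 1;  a.cpu = 2;  break;  /* editor + helpers */
        case TASK_VIDEO:   a.gpu = 8;  a.cpu = 1;  break;  /* decode + control */
        }
        a.off = TOTAL_CORES - a.gpu - a.cpu;
        return a;
    }

    int main(void)
    {
        allocation g = allocate(TASK_GAME);
        allocation c = allocate(TASK_COMPILE);
        printf("game:    %2d GPU, %2d CPU, %2d off\n", g.gpu, g.cpu, g.off);
        printf("compile: %2d GPU, %2d CPU, %2d off\n", c.gpu, c.cpu, c.off);
        return 0;
    }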

Memory and disk are getting cheaper by the second. I wouldn't be surprised if in three years 128GB of memory and 10TB of solid-state disk are the new minimum. All of that, sitting nicely alongside the CPU/GPU bus and avoiding too many hops (northbridge + PCI + SATA + etc.) to get the data in and out, would also speed up the storage and retrieval of information. You could probably do a one-second boot from scratch, with no need for sleep any more, just pure hibernation.

Networking, again, wireless of course. It has been a reality for a while, but I don't expect it to improve considerably in the next three years. I assume broadband will increase a few percent, 4G will fail to deliver what it promises once the number of active clients reaches a few hundred, and the TV spectrum requires more bureaucracy than the world can handle. The cloud will have to wait a bit longer to get where hard drives are today.

Interface

A few designs have revolutionized interfaces in the last three years. I consider the pointer-less interface (decent touch screens, camera-aware devices) and the brain interface the two most important ones. Touch screens are interesting, but they are cumbersome, as your limbs get in the way of the screen you're trying to interact with. The Wiimote was a pioneer, but the MS Kinect broke the usability barrier. It's still in its early stages, but even so it's a great revolution, and given Microsoft's unusual openness about it, I expect it to surprise even the most open-minded.

Brain interfaces, on the other hand, only began to be usable this year (and not that usable yet). Still, the combination of a Kinect, a camera that reads your eyes, and a brain interface to control interactions with the items on the screen should be enough to work efficiently and effectively.

People already follow the mouse with their eyes; it's easy to teach them to make the pointer follow their eyes instead. But to remove uncertainty, and get rid of the annoying cursor once and for all, you need a 3D camera that takes into account your position relative to the screen and the position of other people, who could also interact with the screen in a multi-gaze interface and think together to achieve goals. That has applications from games to XP programming.

Voice control could also be used for more natural commands such as "shut up" or "play some jazz, will ya?". Nothing too complex, as that's another field that has been crawling along for decades and hasn't had a decent sprint since it started…

Cost

The cost of such a machine wouldn’t be too high, as the components are cheaper than today’s complex motherboard designs, with multiple interconnection standards, different manufacturing processes and tests (very expensive!). The parts themselves would maybe be a bit expensive, but in such volumes (and standardised production) the cost would be greatly reduced.

The cost to the environment, though, would not be so low. If mankind continues with the ridiculous need to change computers every year, a computer like that would fill up the landfills. The integration of the parts is so dense (e.g. monitor + cameras + speakers in one package) that it would be impossible to recycle it more cheaply than sending it to the sun to burn (not such a bad alternative).

But in life, we have to choose what's really important. A nice computer that keeps you in a chair for the majority of your life is more important than some pandas and bumblebees, right?

Fool me once, shame on you… fool me twice, shame on me (DBD)

Defective by Design came out with a new story on Apple's DRM. While I don't generally re-post from other blogs (LWN already does that), this one is special, but not for the apparent reasons.

I agree that DRM is bad, not just for you but for business, innovation, science and the evolution of mankind. But that's not the point. What Apple is doing with the App Store is not just locking other applications out of their hardware, but locking their hardware out of the real world.

In the late 80s and early 90s, all hardware platforms were like that, and Apple was no exception. Amiga, Commodore, MSX and dozens of others: each was a completely separate machine, with a unique chipset, architecture and software layers. But that never stopped people from writing code for them, putting it on a floppy disk and installing it on any compatible computer they could find. Computer viruses spread that way too, given how easy it was to share software in those days.

Ten years later, there were only a handful of architectures: Intel for PCs, PowerPC for Macs and a few others for servers (Alpha, SPARC, etc.). The consolidation of hardware was happening at the same time as the explosion of the internet, so not only did more people have the same type of computer, they also shared software more easily, increasing the quantity of software available (and of viruses) by orders of magnitude.

Linux had been riding this wave since its beginning, and that was probably the most important reason such an underground movement got so much momentum. Using free software was considered subversive, anti-capitalist, and those people (including me) were hunted down like communists and ridiculed as idiots with no common sense. Today we know how "ridiculous" it is to use Linux: most companies and governments do, and it would be unthinkable today not to use it for what it's good at. But it's not for everyone, nor for everything.

Apple’s niche

Apple always had a niche, and they were really smart not to get out of it. Companies like Intel and ARM are trying to get out of their niches and attack new markets, to maybe scavenge a section of the economy they don't have control over. Intel is going small, ARM is going big and both will get hurt. Who gets hurt more doesn't matter; what matters is that Apple never went after other markets directly.

From the beginning, Apple's ads were along the lines of "be smart, be cool, use Apple". They never said their office suite was better than Microsoft's (as MS does with OpenOffice), or that their hardware support was better (as MS does with Linux). Once you compare your products directly with someone else's, you're bound for trouble. When Microsoft started comparing their OS with Linux (late 90s), the community fought back, showing all the areas in which Windows was very poor; businesses and governments started doing the same, and that was a big hit to Windows. Apple never did that directly.

By always staying on the sidelines, Apple was the different one. In their own niche, there was no competitor; Windows and Linux never entered that space, not even today. When Apple entered the mobile phone market, they didn't take market share from anyone else, they made a new market for themselves. Those who bought iPhones didn't want to buy anything else; they had only done so because there was no iPhone at the time.

Android mobile phones are widespread, growing faster than anything else, taking Symbian phones out of the market and destroying RIM's hegemony, but rarely touching the iPhone market. Apple fan-boys will always buy Apple products, no matter the cost or the lower quality of the software and hardware. Being cool is more important than any of that.

Fool me once again, please

Being an Apple fan-boy is hard work. Whenever a new iPhone is out, the old ones disappear from the market and you're outdated. Whenever the new MacBook arrives, the older ones look so out of date that all your (fan-boy) friends will know you're not keeping up. If creating a niche to capture people's naivety and profit from it is fooling them, then Apple has been fooling those same people for decades and they won't stop now. That has made them the second biggest company in the world (losing only to an oil company); nobody can argue with that fact.

iPhones have lesser hardware than most of the new Android phones, less functionality and less compatibility with the rest of the world. The new MacBook Air has an Intel chip several years old, lacks connectivity options, and before long won't run Flash, Java or anything else Steve Jobs dislikes when he wakes up from a bad dream. But that doesn't affect the fan-boys a bit. See, back in the days when Microsoft had fan-boys too, they were completely oblivious to the horrendous problems the platform had (viruses, bugs, reboots, memory hogging, etc.), and they would still mock you for not being in their group.

It's the same with Apple fan-boys, and it always has been. I had an Apple ][, and I liked it a lot. But when I saw an Amiga I was baffled. I immediately recognized the clear superiority of the architecture. The sound was amazing, the graphics were impressive and the games were awesome (all that mattered to me at the time, to be honest). There was no comparison between an Amiga game and an Apple game back then, and everybody knew it. But Apple fan-boys were all the same, and there were fights in BBSs and meetings: Apple fan-boys on one side, Amiga fan-boys on the other, and the pizza would be gone long before the discussion cooled down.

Nice little town, invaded

But today, reality is a bit harder to swallow. There is no PowerPC, or Alpha, or even SPARC now. With Oracle owning SPARC's roadmap, and given what they are doing to Java and OpenOffice, I wouldn't be surprised if Larry Ellison woke up one day and decided to burn it all down. Now there are only two major players across the small-to-huge markets: Intel and ARM. With ARM staying at the small end and below, that leaves Intel with all the rest.

MacOS is no longer an OS per se. Its underlying sub-system is based on (or ripped off from) FreeBSD (a robust open source Unix-like operating system). As it happens, FreeBSD is so similar to Linux that it's not hard to re-compile Linux applications to run on it. So why should it be hard to run Linux applications on MacOS? Well, it's not, actually. With the same platform and a very similar sub-system, re-compiling a Linux application for the Mac is a matter of finding the right tools and libraries; everything else follows its natural course.

Now, this is dangerous! Windows has the protection of being completely different, even on the same platform (Intel), but MacOS doesn't, and there's no way to keep the penguin invasion at bay. For the first time in history, Apple has opened its niche to other players. In Apple terms, this is the same as killing itself.

See, capitalism is all about keeping control of the market. It's not about competition or innovation, and it's clearly not about the redistribution of capital, as the French suggested in their revolution. Although Apple never fought Microsoft or Linux directly, they kept their market well under control, and that was the key to their success. With very clever advertising and average-quality hardware, they managed to build an entire universe of their own and attract a huge crowd that, once in, would never look back. But now that bubble has been invaded by the penguin commies, and there's no way for them to protect that market the way they have before.

One solution to rule them all

In a very good analysis of the Linux "dream", this article suggests that it is dead. If you look at Linux as if it were a company (and given the success of Canonical, I'm not surprised people do), the author has a point. But Linux is not Canonical, nor a dream, and it's definitely not dead.

Along the same lines, you could argue that Windows is dead. It hasn't grown for a while, and Vista destroyed confidence, moving more people to Macs and Linux than ever before. In the same way, more than 10 years ago, a common misconception among Microsoft's fan-boys was that the Mac was dead: its niche was too small, the hardware too expensive and incompatible with everything else. Windows is in that same position today, but it's far from dead.

But Linux is not a company; it doesn't fit the normal capitalist market analysis. Remember that Linux hackers are commies, right? It's an organic community; it doesn't behave like a company or anything capitalism would like to model. This is why so many predictions about it have been wrong (Linux is dead, this is the year of Linux, Linux will kill Windows, the Mac is destroying Linux, and so on). All of this is pure bollocks. Linux's growth is organic, not exponential, not bombastic. It won't kill other platforms. It never has, and it never will. It will, as it has done so far, assimilate and enhance, like the Borg.

If we had had Linux in the French revolution, the people would have had a better chance of getting something out of it, rather than leaving all the glory (and profit) to the newly founded bourgeois class. Not because Linux is magic, but because it embraces change, expands the frontiers and exposes the flaws in the current systems. That alone is enough to keep existing software in constant check; that is vital to software engineering, and it will never end. Linux is, in a nutshell, what's driving innovation on all other software fronts.

Saying that Linux is dead is like saying that generic medication is dead because it doesn't make a profit or hasn't taken over big pharma's markets. That is simply not the point, and it only shows that people still have the same mindset that put Microsoft, Yahoo!, Google, IBM and now Apple where they are today: all afraid of the big bad wolf, which is not big, nor bad, and has nothing to do with a wolf.

This wolf is, mind you, not Linux. Linux and the rest of the open source community are just the only players (along with Google, I'll give them that) who are not afraid of that wolf, although, according to business analysts, they should be, in order to play nice with the rest of the market. The big bad wolf is free content.

Free, open content

Free as in freedom is dangerous. Everybody knows what happens when you post on Facebook about your boss being an ass: you get fired. The same would happen if you said it out loud at a company lunch, wouldn't it? Running random software on your machine is dangerous; everybody knows what can happen when viruses invade your computer, or rogue software starts stealing your bank passwords and personal data.

But all systems now are very similar, and the companies of today are still banging their heads against the same wall as 20 years ago: lock down the platform. Twenty years ago that was quite simple, and really just a reflection of how any computer was built. Today, it has to be actively enforced.

It's very easy to rip a DVD and send it to a friend. Today's broadband speeds let you do that quite fast, indeed. But your friend hasn't paid for it, and the media companies felt threatened, so they created DRM. Intel has just acquired McAfee to put security measures inside the chip itself. This is the same as DRM, but at a much lower level. Instead of dealing with the problem, those companies are actually delaying the solution and only making the problem worse.

DRM is easily crackable. It has been shown over and over that no DRM so far (software or hardware) has resisted people's will. There are far more ingenious people outside the companies that make DRM than inside, so it's impossible to come up with a solution that will fool all outsiders, unless they hire them all (which will never happen) or kill them all (which could happen, if things keep going at this pace).

Unless those companies start accepting the problem as the new reality, and create solutions that work in this new reality, they won't make any money out of it. DRM is not just bad; it's very costly, and it hampers progress and innovation. It kills what capitalism loves most: profit. Take all the money spent on DRM schemes that were cracked a day later, all the money the RIAA spent on lawsuits, all the trouble of creating software to lock users in, and the drop-out rate when some better solution appears (see Google vs. Yahoo), and you get the picture.

Locked down society

Apple's first popular advertisement was the one mocking Orwell's 1984 and showing how Apple would break the rules by bringing something completely different that would free people from the locked-down world they lived in. Funny, though, how things turned out…

Steve Jobs says that Android is a segmented market, and that Apple is better because it has only one solution for every problem. They said the same thing about Windows and Linux: that segmentation is what's driving their demise, that everybody should listen to Steve Jobs and use his own creations (one for each problem), and that the rest was just too noisy, too complicated for really cool people to use.

I don't know about you, but to me that sounds exactly like Big Brother's speech.

With DRM and control of the App Store, Apple has total freedom to put in, or take out, whatever they want, whenever they want. It has happened, and it will continue to happen. They never put Flash on the iPhone, not for any technical reason, but just because Steve Jobs doesn't like it. They're now taking Java out of the Mac "experience", again, just for kicks. Microsoft at least put .NET and Silverlight in place of what it removed; Apple simply takes things out, with no replacements.

Oh, how Apple fan-boys like it. They applaud, they defend it with their lives, without knowing why or even whether there is any reason for it. They just watch Steve Jobs's speech and repeat it, word for word. There is no reason, and those people sound dumber every day, but who am I to say so? I'm the one outside the group, the one who has no voice.

When that happened with Microsoft in the 90s, it was hard to take. The numbers were more like 95% of them to 1% of us, so there was absolutely no argument that would make them understand the utter garbage they were talking about. But today Apple's market is still not that big: the Apple fan-boys are indeed making Apple the second biggest company in the world, but they still look like idiots to the remaining 50+% of the world.

Yahoo!’s steps

Yahoo has shown us that locking users down, stuffing them with ads and completely ignoring the upgrade of your architecture for years is not a good path. But Apple, like Yahoo, thinks they are invulnerable. When Google exploded with their awesome search (I was on Yahoo's search team at the time), we were shocked. It was not just better than Yahoo's search, it actually worked! Yahoo was afraid of being the copy-cat, so they started walking down other paths, and in the end it never really worked.

Yahoo, which started as a search company, now runs Microsoft's lame search engine. That is, for me, the ultimate proof that they failed miserably. The second biggest thing Yahoo had was email, and Google does it better. Portals? Who needs portals when you have the whole web at your fingertips with Google search? In the end, Google killed every single Yahoo business, one by one. Apple is following the same path, locking themselves out of the world, just waiting for someone to come along with a better and simpler solution that actually works. And they won't listen, not even when it's too late.

Before Yahoo! there was IBM. After Apple there will be others. Those who don't accept reality as it is, who stick with their old ideas just because they've worked so far, are bound to fail. Of course, Steve Jobs made all the money he could, and he's not worried. Neither are David Filo or Jerry Yang, Bill Gates or Larry Ellison. And this is the crucial part.

Companies fade because great leaders fade. Communities fade when they're no longer relevant. The Linux community is still very much relevant and won't fade any time soon. And, given its metamorphic nature, it's very likely that the free, open source community will never die.

Companies had better get used to it, and find ways to profit from it. Free, open content is here to stay, and there's nothing anyone can do to stop it. Being dictators is not helping the US patent and copyright system, it's not helping Microsoft or Intel, and it definitely won't help Apple. If they want to stay relevant, they had better change soon.

Inefficient Machines

Most computers today have the same basic structure: computing hardware, composed of millions of transistors, getting data from its surroundings (normally registers) and putting values back (into other registers), and data storage. Of course, you can have multiple computing units (integer, floating point, vector, etc.) and multiple layers of data storage (registers, caches, main memory, disk, network, etc.), but it all boils down to these two basic components.

Between them you have the communication channels, which are responsible for carrying the information back and forth. In most machines, the further you are from the central processing unit, the slower the channel. So satellite links will be slower than network cables, which will be slower than PCIx, the CPU bus, and so on. But, in a way, since the whole objective of the computer is to transform data, you must have access to all the data storage in the system for it to be a useful computer.

Not-so-useful

Imagine a machine where you don't have access to all the data available, but you still depend on that data to do useful computation. What happens is that you have to infer what the data you needed was, or get it through a different, indirect path, converted into subjective ideas and low-quality patterns, which then have to be analysed and matched against previous patterns, and almost-random results come out of such poor analysis.

This machine, as a whole, is not so useful. A lot less useful than a simple calculator or a laptop, you might think, and I'd agree. But that machine also has another twist. The data that cannot be accessed has a way of changing how the CPU behaves, in unpredictable ways. It can increase the number of transistors, change the width of the communication channels, completely remove peripherals or add new ones, and so on.

This machine has, in fact, two completely separate execution modes. The short-term mode, executed within the inner layer, in which the CPU makes decisions based on its inherent hardware and the information that comes from far beyond the outer layer; and the long-term mode, executed in the outer layer, which can be influenced by the information beyond (plus a few random processes) but never (this is the important bit, never) by the inner layer.

The outer layer

This outer layer changes data by itself; it doesn’t need the CPU for anything, because the data is, itself, the processing unit. The way external processes act on this layer is what makes it change, on a very (very) slow time scale, especially when compared to the inner layer’s. The inner layer is, in essence, at the mercy of the outer layer.
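To make the one-way coupling concrete, here is a toy model in Python (a sketch only; the class names and numbers are invented for illustration). The outer layer drifts by itself and dictates the inner layer’s configuration, while the inner layer computes quickly from that configuration and outside input but has no handle at all on the outer layer.

    import random

    class OuterLayer:
        """Slow-changing data that is also its own processing unit."""
        def __init__(self):
            self.blueprint = {"transistors": 1_000, "channel_width": 8}

        def drift(self):
            # External, quasi-random processes change the blueprint very slowly.
            key = random.choice(list(self.blueprint))
            self.blueprint[key] = max(1, self.blueprint[key] + random.choice((-1, 1)))

    class InnerLayer:
        """Fast, short-term mode: configured by the outer layer, never the reverse."""
        def __init__(self, blueprint):
            # The inner layer only ever sees a copy, so it cannot touch the original.
            self.config = dict(blueprint)

        def compute(self, external_input):
            # Decisions depend on the inherited hardware plus outside information.
            return external_input * self.config["channel_width"]

    outer = OuterLayer()
    for generation in range(3):
        inner = InnerLayer(outer.blueprint)   # hardware is fixed at build time
        print(generation, inner.compute(external_input=42))
        outer.drift()                         # only the outer layer ever changes

Nothing in InnerLayer can reach back into outer.blueprint, which is exactly the asymmetry described above.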

This machine we’re talking about, sometimes called the ultimate machine, has absolutely nothing ultimate about it. We can build computers that can easily access the outer layers of data, change them or even erase them for good, as easily as they do with the data in the inner layer.

We can, today, build machines much better designed than this infamous machine. When comparing designs, our current computers have a much more elaborate, precise and analytical design; we just need more time to get it to perfection, but it is my opinion that, in matters of design, we are already far beyond that of life.

Living machines

Living creatures have a brain (the CPU and the inner memory), a body (all the other communication channels and peripherals to the world beyond), and genes, the long-term storage that defines how all the rest is assembled and how it behaves. But living creatures, contrary to Lamarck’s beliefs, cannot change their own genes at will. Not yet.

The day humans start changing their own genes (and that day is not too far away), we will have perfected the design, and only then will we be able to call it the ultimate machine. Only then will the design be complete, and only then will the machine be able to truly evolve.

Writing your own genes would be like giving an application the right to rewrite the whole operating system. You rarely see that in a computer system, but that’s only because we’re limited to creating designs similar to ourselves. This is why all CPUs are sequential (even when they’re parallel): because our educational model is sequential (to cope with mass education). This is why our machines haven’t been self-mending from the beginning: because we aren’t.

Self-healing is a complex (and dangerous) subject for us because we don’t have first-hand experience with it, but given the freedom we have when creating machines, it’s a complete lack of imagination not to do it. It is a complete waste of time to model intelligent systems as if they were humans, to create artificial life with simple neighbouring rules, and to think that an automaton is only a program that runs alone.

Agile Design

The intelligent design concept was coined by people who understand very little about design and even less about intelligence. The design of life is utterly poor. It wastes too much energy, it provides very little control over the process, it has too many variables and too little real gain at each step.

It is true that, from a hardware point of view, our designs are very bad compared to nature’s. Chlorophyll is much more efficient than a solar cell, spider silk is much stronger than steel, and so on. But the overall design, how the process works and how things get selected, is just horrible.

If there were creators of our universe, they had to be a bunch of good engineers with no management at all, creating machines at random just because it was cool. There was no central planning, no project, only ad-hoc features emerging and lots of easter eggs. If that’s the image people want to have of a God, so be it. Long live the Agile God, a bunch of nerdy engineers playing with toys.

But design would be the last word I’d use for it…

2010 – Year of what?

Ever since 1995 I have heard the same phrase, and since 2000 I have stopped listening. It was already the year of Linux in ’95 for me, so why bother?

But this year is different, and Linux is not the only revolution in town… By the end of last year, the first tera-electronvolt collisions had been recorded at the LHC, getting us closer to seeing (or not) the infamous Higgs boson. Now the NIF reports a massive 700 kilojoules delivered in a ten-billionth-of-a-second laser pulse which, if the programme stays on schedule, could lead us to controlled fusion!

The human race is about to finally put the full stop on the Standard Model and achieve controlled fusion by the end of this year; who cares about Linux?!

Well, for one thing, Linux is running all the clusters used to compute for and maintain all those facilities. So, if it were up to Microsoft, we’d still be in the Stone Age…

UPDATE: More news on cold fusion

Start-ups

Starting a new idea and making it profitable is much more an art than logic. There is no recipe, no fail-proof tactic. The most successful entrepreneurs are either lucky or have good gut feeling. Hard work, intelligence and the right idea are seldom useful if they don’t come with luck or a crystal ball. After you have started up, however, they’re the only things that matter.

I may not know how to start a business and succeed, but I do know how to make one fail miserably. I have done it myself and seen many (many) friends fail for different (but similar) reasons. Yet I still see other friends trying, or the same friends still thinking they could do better next time, so this is my message to all of them.

Do you have a crystal ball?

I really mean it, one of those that actually works the way it’s supposed to. If the answer is no, think twice. Seriously, I’m not joking. The only people who partially succeeded were the ones who had nothing to lose, since they had enough money to keep them going for years, but (unfortunately) they’re not filthy rich today. The rest are employees somewhere in the world…

Hard work

One thing they all had in common was the idea that they could do it with hard work and a good idea. How wrong they were… Let’s put it simply: if hard work took you anywhere, the world would be dominated by dockers. If good ideas had any impact, the world would be dominated by scientists. But the world is dominated by bankers… Q.E.D.

Working hard won’t help; you have to work just right. That usually means working very little in the beginning, a bit more afterwards, and finally hiring some hard workers to do the work for you. Simple, stupid.

Picture this: a salesman comes to your door to sell you a pair of scissors. You have many at home, but he assures you it’s the best pair of scissors in the world, that it has twenty patents and that the guys behind the design like to work very hard on their ideas. Would you buy it? No! On the other hand, lots and lots of people go to the supermarket and buy scissors just because they’re cheap (and because they assume they’ve lost their own).

Don’t expect people to understand your hard work; they couldn’t care less how much work you do, they just care about what benefits you can give them. The supermarket scissors give them the benefit of being cheap and of “being there”; the salesman is annoying by definition. No matter how good yours are, they simply won’t buy them.

Ingenious crafts

Now, at this point the friends I mentioned are certainly thinking: “but my product was much better. It was new, there was nothing like it on the market”. The truth is, who cares?!

Novelty doesn’t sell, quality doesn’t sell (at least not yours, anyway). If Apple starts selling toothbrushes, people will buy them by the millions; if you sell a crystal ball that actually works, they’ll ignore it completely. Who are you, anyway? Unless your product carries some kind of status, and their friends (and other posh people) are buying it too, they won’t even bother.

If your product is really good, you have to put a high price on it. Poor people won’t buy it, and rich people will buy from the fancy brand instead. If you sell it cheap, poor people won’t buy it (because it’s neither fancy nor necessary) and rich people won’t even see you. Poor people only buy superfluous stuff from fancy brands (or fakes), and rich people only buy from the real (sometimes fake too!) brand.

If your item is not an everyday necessity, like food, you are in deep trouble. Being the best is not enough; ordinary products sell more than state-of-the-art ingenious crafts.

Do it the right way ™

Some of my failed friends (no hard feelings, ok?), who are now really pissed off, are thinking: “But I didn’t put in all that effort, my product was clearly better than any other, and it was free! How could it go wrong?”. Capitalism 101: no demand, no production.

Don’t yell just yet: when I say demand, I mean demand driven by desire. There was always a demand for the internet, but people only started desiring it a few decades ago. There was always a demand for a decent search engine, but the desire only grew after all the failed attempts from Yahoo, AltaVista, etc. It was when there was a desire for instant communication, and email was not enough, that ICQ had its chance.

Doing it right is not enough; you need to do it at the right time. The right time is not when there is no other option like yours, since that fact is irrelevant. The right time is when many others are failing. This, my friends, is the crucial point. You can have a million ideas, but if none of them coincides with the utter failure of one or more other ideas, they’re worthless.

Don’t trust your brain

The recipe for disaster is simple: trust your brain. Trust that your intelligence will lead you to success. Trust that your ideas are better than others’ and that that will lead you to success. Trust that hard work will lead you to success. People who trust that their empire is unbreakable are already breaking. To trust is to fail.

The simplest rule for success, as I picture it, is to use other people’s failures for your own success. If someone is doing something wrong and people are complaining, there is a high demand by desire for that particular thing. If you can identify it and do what people want, it’s likely that you will succeed.

Again, don’t do more than what they need, nor better than you have to. Keep it simple, keep it stupid. Hard work won’t lead you anywhere, remember? You have to be fast, noisy and sometimes ridiculed. It’s part of the game. Good buzz and bad buzz are both buzz, and buzz is good anyhow.

In a nutshell

  • Minimum work, maximum opportunity: do as little as possible before the window opens, make connections, prepare demos and mock-ups of several different projects, and multiply your chances.
  • Wait for a major failure: investigate where others are failing and take action immediately; put anything on the market, no matter how ugly or broken. Beta is always Beta (thanks, Google!).
  • Don’t let the window close: once you’ve got your opportunity, work hard as hell, buzz, spam, be ridiculous.
  • Don’t use your brain too much: good ideas are no better than bad ones, and your idea is no better than any other. Failing ideas are important; non-existent ideas are irrelevant.

So, my failed friends, it is very simple: you will fail unless you step on top of other people’s failures and don’t let them do the same to you. Now you understand why I won’t ever try again… This is absolutely not my style at all! I’d rather have friends than be rich.

A bit of history

Nothing better than a good bit of history to show us how important some people’s failures are to other people’s success…

Microsoft’s success

IBM was dealing with Digital Research to put CP/M on its new architecture, the PC. Digital was sloppy, negotiations failed, and Microsoft (until then largely irrelevant) got hold of a CP/M clone, called it MS-DOS and gave it to IBM. You know the rest…

Microsoft had previously worked on a Unix version for micro-computers, called Xenix, which was later sold to SCO, who ported it to the PC; it failed. Unix is, as we all know, the best operating system ever made. There was no Unix for micro-computers; it was a perfect market, right?

Wrong. The first move (built on top of a failure), and not the second (built on a bright idea), is what made Microsoft the number one software company in the world today. For bad or for worse, they won big time.

Yahoo vs. Google

In the beginning, the internet was a bunch of Gopher sites. When it moved to HTTP, people started using HTML and the commercial boom came in, and it became impossible to find anything decent.

Several people started building directories of cool websites, but it was Yahoo who consolidated them into one big site. They bought several other companies, most notably for their directory contents and search engines. No matter how hard they tried, it was still not good enough. In 2000 they closed a search deal with Google. For a short time, Google actually provided search results for Yahoo, but pride got the better of them: they bought Inktomi (who?) and dropped Google’s technology, a move which obviously brought no value at all to their users.

The search was still no better than Google’s, and Google saw Yahoo’s pride for what it was: Yahoo’s biggest mistake. Google started low, basically using word of mouth as buzz and building really cool (but simple, even stupid, and easy to implement) features. Even their search engine was not a novelty; others had done similar things in the past, and the founders had spent their college years working on it.

Yahoo’s mistake was Google’s take. Google now has more than half of the internet passing through it, leaving Yahoo with second- (or third-) class, outdated products. The company is now, finally, destroyed.

To make things even more interesting, Microsoft tried to compete with Google, but failed miserably. Its products were even worse than Yahoo’s and, to cement Yahoo’s mistake once and for all, Yahoo is now using Microsoft’s technology as its search platform.

There are obviously many more stories of failures and successes, but I leave those as an exercise for the reader. My final and most important point is: commercial success has nothing to do with quality, only with timing and a good deal of bad behaviour.