Bjelkeman's travel notes

Travels with the cloud in my pocket.

Advice to Y-Combinator non-profit startups – part 2 – Core success factors

This is part of a series of blogs called Advice to Y-Combinator non-profit startups. Of course, it has wider application than that. Part 1 offered some general advice to get things started.

Part of the Akvo RSR developer team, meeting at Villa Kakelbont this autumn

There was a discussion on Hacker News about how to make money on open source software, where I chipped in a bit about how we at Akvo do it. I wrote enough to actually make a coherent blog post out of it.

For quite a long time there has been an idea going around that organisations that create open source software should make money by providing software support. This has proven not to be very lucrative for most businesses that try it, and other models have been tried, some of which are described in the discussion above.

We took a somewhat different approach with Akvo.

We noticed the really poor use of Internet systems in international development aid. More than $120 billion is spent on it yearly, and nobody really has a clue where the money goes. There is no useful overview. So we started building tools to fix that, and we supply them as a paid-for service.

Everything we build is open source software. We have 45+ people working on this, with paying partners such as the World Bank, UNICEF, the Liberian government, Mars Chocolate and many hundreds more. It is not your ordinary business model, but it works and we are growing. You can make open source software and earn a decent living.

Core success factors

We believe that our success comes out of a few core things:

  • We brought together domain experts as equals, i.e. people working in international development, water and sanitation issues (our starting market), network organisations, computer software and services, and software marketing and communications, to solve a problem. This is fairly unusual in both the software and international development industries.
  • We say our team is a three-legged stool: the partner team (more about that below), the software team and the communications team. If we don’t treat all three equally, the stool falls over. We go so far as to think it is imperative that the organisation is run neither by the tech side nor the international development side alone, but by both. So we have until now had two directors of the organisation, one from each domain, working very closely with the communications director.
  • Maybe most importantly though, we have a very experienced partner team. They have worked in this market for decades. They know “everyone”. We literally have connections to thousands of organisations through our networks and we understand how to talk to those organisations. Our partner team know where all the gremlins are and how the processes work. They know how to get the required startup and expansion investments as well as how to get the big organisations to use our tools.

Non-traditional sales and marketing

About the partner team: we don’t consider ourselves to have any customers. We treat all of the organisations that work with us as partners. They then treat us as partners too, and it completely changes the relationship when you are trying to solve a problem. Of course it helps that we are a non-profit foundation. We are also not-for-loss: we have a functional business model, which is obviously critical.

In a traditional company our partner team would consist of strategic sales people, account managers, project managers, consultants, trainers and so on. Our partner team fulfils all those roles, but it is a _partner_ team. Sales are not done on a quota and no bonuses are paid (which often drive really crappy sales in a software company).

We have no marketing and PR team; we have a communications team. Most of our staff communicate. Everyone is in fact encouraged and empowered to speak for the organisation; the communications team supports everyone in learning how to communicate well. We hardly do any PR. We may need to increase it, but it mostly takes care of itself thanks to our people’s open communication.

No bespoke development

Some open source product organisations try to supplement their income by doing bespoke development on top of their product. We don’t, as we find that this only distracts you from building a good product. Our revenue comes from hosting, training and implementation consulting services. By implementation we don’t mean technical implementation services, but helping organisations implement the tools themselves.

We avoid technical implementation services, as most organisations we work with have very little internal technical knowledge. If we then take responsibility for implementing the technical side, we find that they don’t take ownership of the bigger issues: things like learning to publish open data, and the changes in organisational culture which this implies. Then their failure to embrace the change needed in the organisation is projected as our failure to implement the technical side.

We have technical account management, but we require our partners who implement our tools either to have the required technical skills in-house or to hire them in. If they don’t do this, take ownership of the bigger issues and bring in competent technical project management and help, we don’t do the projects, as they are very likely to fail. This may sound obvious, but often it isn’t to the partners we work with.

I could write a lot more, but we are creating the Akvo Handbook, which will outline all of this and be available under an open content license. You can read it all there. But don’t hesitate to ask any specific questions you may have.

Filed under: Open source, Startup

Advice to Y-Combinator non-profit startups

Y-Combinator is probably the most interesting organisation in the US and EU when it comes to startup funding for digital entrepreneurs. Interestingly enough, they have decided to also help non-profit startups. They did an experiment in the last batch with one organisation, Watsi (donation-funded healthcare), which apparently turned out well. So Y-Combinator is now taking applications from non-profit teams too, as part of their normal application process.

As someone who has been running a successful non-profit digital startup for a number of years, I thought I’d share some thoughts on things I think are important to be successful as a non-profit digital startup. Much is the same as running a normal digital startup, so I’ll concentrate on the differences.

First though, for those who don’t know Y-Combinator, I’ll let them describe themselves:

In 2005, Y Combinator developed a new model of startup funding. Twice a year we invest a small amount of money ($14-20k + an $80k note) in a large number of startups (most recently 52). The startups move to Silicon Valley for 3 months, during which we work intensively with them to get the company into the best possible shape and refine their pitch to investors. Each cycle culminates in Demo Day, when the startups present to a large audience of investors. But YC doesn’t end on Demo Day. We and the YC alumni network continue to help founders for the life of their company, and beyond.

The Akvo stand at a recent conference, with Peter van der Linde, one of the co-founders of Akvo

Incentives are different

One of the most obvious, and maybe most important, differences from a for-profit company is that you are not doing this to become rich. It may sound obvious, but it fundamentally changes several important things in an organisation.

The people or organisations that invest money in your organisation will still be looking for a return on investment, but often that return will not be in the form of money. The return can sometimes be hard to define; if you define it well, you are well on your way to getting investments or grants.

Akvo, of which I am a co-founder, has received several million euros in grants and investments. We haven’t been asked to provide a board seat for any of our investors, nor do any of our investors hold shares in the organisation (it is a group of foundations). This means that we have a great deal of autonomy, but it also means that we have to be very careful how we populate our supervisory boards, with a good balance between understanding our business and challenging us. This is not that different from a for-profit board, but I’d advise against having only board members with no experience in the non-profit space. Find board members who have crossed over in either direction, with experience from both non-profit and for-profit.


Akvo is non-profit, but we are also not-for-loss. In other words: we charge money for the services we provide, as we think it is fundamentally unhealthy to rely on grants to survive as an organisation. We still take grants, which are mainly investments to build new products or break into new markets, but each product and market is expected to pay for itself in the long run. This has in fact been critical for our investors’ decisions to invest in or grant us money.


Because it is not primarily about making money, and making you or the investors rich, you will find recruitment to be quite different as well. The people who come and work with you will be doing it for the passion of the cause, not the money they can make. People have to eat though, so I’d advise you to pay a decent salary. If you do, your team will passionately stay with you for a long time.

No customers

We say that we don’t have any customers, only partner organisations. This may sound like a silly point, but we think it is very important. By always talking about the organisations we work with as partners rather than customers, we think of them differently. Importantly, our partners also think about us differently. We have a much better relationship with our partners as a result.

Open source

We work hard at trying not to duplicate work others have done already. So we use open source software wherever possible, but it is a pragmatic tool, not a religion (i.e. we use GitHub, Disqus etc.). We build all our services as open source software which we run as a service. Most of the services that we operate benefit fundamentally from being run at large scale. But our philosophy is that the “door is always open”: if you want to take the software and your data and move somewhere else, you are always welcome to do so. At the same time, the systems are complicated to develop, operate and maintain, and our partners don’t really want to have to do that. They just want to use the systems. As Akvo’s systems are gradually becoming digital governance tools in the countries and organisations that use them, we think it is of critical importance that we supply open source tools. More about that in another blog.


There is definitely competition to our services. However, as we are here to fix a problem, not make money, we are very pragmatic about working together with potential competitors that see the world like we do. We support each other and try to make sure we don’t overlap in unhealthy ways. We also compete with for-profit companies, which I think is good for both sides. We keep each other sharp.

Discoverable communications

Something Mark Charmer, who works with me at Akvo, and I have come up with as a concept to describe how we communicate is discoverable communications. We work in the emerging digital age: use the tools! Nearly everyone in our 45-strong Akvo team uses blogs, Twitter, commit messages, video and more to communicate. Our communications team is there to support everyone else in communicating well, not to be the gatekeepers of information. As a non-profit, your reasons to be secretive are a lot fewer than for a for-profit. Use this advantage ruthlessly.

Of course there is a lot more that can be said about running a non-profit digital startup, and if you think this is useful I’ll write some more.

Filed under: Open source, Startup

Open-source developers, you are not innovative, you are expensive and not collaborative

So, is it true that open-source developers are not innovative, are expensive and not collaborative? At least that is the impression I get when I read what Joel Selanikio, CEO and co-founder of DataDyne, wrote recently. He surprised me somewhat the other day when he said:

“Regarding open source, in my experience its promise (ie constant innovation, lowered costs, collaboration, etc) has not been met by the reality.” and then he said:
“The only open-source software that I have ever used regularly is the Firefox browser, though I don’t much anymore.”

XKCD: Someone is wrong on the internet

Brilliant comic panel by XKCD (I have a signed print of this one).

To me, these are pretty surprising statements coming from someone who has based his business on several open-source products. DataDyne builds and operates the EpiSurveyor service, which uses mobile devices to collect data. DataDyne has just under one hundred paying customers for its services [1] and about nine thousand users of the free tier, which is no-cost up to five thousand data uploads per year (i.e. subsidised by the paying customers).

I was surprised because the two core data collection components of the EpiSurveyor system both build on open source: the EpiSurveyor J2ME (Java) app is built on the framework of the JavaROSA open-source project, and the Android app “is based on the excellent work done in the ODK project at the University of Washington” (wording from DataDyne’s own writing). Not only that, DataDyne’s web site runs WordPress, which is also open-source.

I think anyone who decides to use JavaROSA, ODK and WordPress to help run their business wouldn’t do so if they didn’t think these open-source products were innovative, low cost and built through great collaboration. I should probably just stop right here, but I find this kind of attitude too interesting not to analyse a bit further. There is quite a lot said about open-source software that I don’t particularly agree with, and this is an example of it. I wanted to discuss some of my thoughts on the subject.

Open-source doesn’t offer constant innovation, lowered costs and collaboration?

The biggest open-source project of them all is the internet itself. (The internet is without doubt also the most complex interconnected “machine” humans have ever created.) It runs on open standards and protocols and is constantly developed. HTML is the code used to mark up web pages so that they get structure and layout [2]. The HTML standard is a huge collaborative project: no single organisation owns it, and improving it is a constant effort. It is not always clear what the best way forward is, and often something good happens which wasn’t “according to plan”, like HTML5. HTML and its use is a highly collaborative environment, and all the code is open (for any web page): you can “View->Source” and see how a particular web page has been assembled. This very open way of working has been a critical part of making the web an enormous success. I think that this is innovative and collaborative…

The web propelled the internet into popularity and has made it possible to get access to all the glory (and gore) of the internet, for as low as US$15/month or free at your local library or school. I think there is overwhelming evidence to support the statement that open-source is offering constant innovation, lowering costs and creates collaboration.

“The only open-source software that I have ever used”

A lot of people don’t think they use any open-source software. I am not sure how anyone working in a mobile phone and web based company would get any work done at all these days without using open-source software. It even escapes many people’s attention that large parts of the smartphone operating system Android are open-source.

Even Apple, a more secretive company, has released the core of the Apple Safari web browser, WebKit, as open source (it was based on KHTML, and only released openly after the community applied some pressure on Apple; thanks to @peppelorum for reminding me). In fact nearly 40% of web traffic comes from WebKit-based browsers these days, and 60% from all open-source based browsers. [3]

It is hard to do anything on the web without using open-source software considering that the majority of the web servers in the world are open-source. The open-source web servers Apache and nginx together account for more than 70% of the world’s web servers. [4]

Joel also said that:

“The only open-source software that I have ever used regularly is the Firefox browser, though I don’t much anymore. I believe that this is because of the poor business models for open source products: if you give your software away, you need to charge for something. For open-source software that either means training and consulting (which means you are not incentivized to reduce the training/consulting requirement for your software) or grant support (in which case you are more focused on grantors than users).”

Poor business models are often brought up in these types of discussions about open-source software. A lot of people seem never to have heard about RedHat, a billion dollar business with US$146 million in net income in 2012 and 3,700 employees. Or Canonical (US$30 million/400 employees), or Automattic (which makes WordPress), with 115 employees and more monthly unique visitors to their web site than Amazon (IMHO a good business model is more than just the dollars). None of these companies relies on grants, or on “only” training and consulting, for its income.

To be fair to Joel, in a blog post which he links in the discussion, he says, in a footnote, that all this is about “‘user-facing software’: software designed for regular people to use. I am not talking about back-end, programmer-facing software — a field in which open-source has made significant and ongoing and innovative contributions.” Which is a bit like putting your caveat at “the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of the Leopard’.” [5]

But he posted all these statements in a discussion about promoting his company versus some open source software (which I work with, and which is for “regular people”), essentially to disparage open source to these “regular people”.

This attitude makes me a bit sad. And I think that is a shame, as DataDyne make products for a good cause.

[1] Based on data from their web site and information in that LinkedIn post. 8965 users (15 Aug 2012, 16.21 CEST) and he says “less than 1% pays for the service”.
[2] There are more components, like CSS, JavaScript and more, but they are largely handled the same way as I describe HTML above.
[3] Chrome (30.06%), Firefox (21.01%), Safari (9.10%), Android (2.57%) are all open-source based and make up 63.28% of all browser usage.

[4] The web servers Apache (59.39%) and nginx (11.53%) together account for more than two thirds of the world’s web servers (70.92%).

[5] Extra points, without search engine, for identifying who said that.

Edit: 16 Aug 2012, 16.12 CEST, added piece about KHTML.

Filed under: Open source

The future we deserve – 100 essays about the future, #theFWD

Vinay Gupta (@leashless) said on Twitter:

“I’m putting together a book called The Future We Deserve, open brief, 500 words, sign up at”. [1]

The future we deserve - 100 essays about the future (picture: Dragonfly, by Maria Elvorith)


The book is 100 essays about the future, and sometimes about the now, written by nearly as many authors. It’s inspiring, scary, fun, thoughtful.

If someone else had asked, I probably would have ignored it, but I often have very engaging conversations with Vinay, in a mad-scientist kind of way. I would like to say that this is because I like the way he thinks, but that feels… gruesome. Or actually, maybe that is correct: I do like the way he thinks; it is the outcomes of his thinking which are pretty gruesome, and one of the essays I wrote for #theFWD addresses just that. Gruesome outcomes, which very few people like to think about.

I started to write about a few things that have been bumping around in my life for a while. But I think I overdid it: I actually wrote four essays and, to my surprise, the editors included them all. The book is available to read on Appropedia, as well as a PDF download, but if you are like me you may want to actually buy a hardcopy of it.

[1] I love Twitter, but at the same time I have a hard time using it. You can’t find really old tweets easily, and the archive browsing is slow and cumbersome. Otherwise I would have linked to the tweet.

Filed under: Books, Crowd-sourcing, Social and economic policy

Openaid.se – Swedish development aid transparency

This was originally posted on the Open for Change blog.

Openaid.se screenshot

Today I attended the launch of the new aid transparency effort Openaid.se, a joint effort between the Swedish Ministry of Foreign Affairs and the Swedish International Development Cooperation Agency (SIDA) to show where Swedish government development aid money is going. The Swedish minister for Development Cooperation, Gunilla Carlsson, presented the effort and went into some depth to describe the work.

Together with Akvo, I was asked by the Swedish Foreign Ministry to review the site and give feedback before the launch. I was also part of a review panel which discussed the work after the presentation, together with a very engaged audience.

I think Openaid.se is a very good effort to start showing the Swedish aid budget. The team working on this was clearly very passionate about the work and has put in a lot of effort making both budgets and thousands of documents visible online. We would like to commend everyone involved on a great start.

openaid-tv picture

To see the video of the launch event, click the picture above. The panel, which I was part of, starts at 34 minutes in.


Filed under: Development aid, Open source

Heroes: Becky Straw at The Adventure Project

Adventure Project World Water Day

The Adventure Project was co-founded by Becky Straw, a friend and hero of mine. The Adventure Project is a non-profit organization established last year to increase investments in positive social enterprises around the world.

For World Water Day they pledged, with the help of a whole bunch of bloggers, to help raise money for a particular water project. I promised to help them, but I was sick and didn’t manage to get my pen out yesterday. But I will write about them anyway, as I really like what they do.

Becky used to work for Charity: Water before she started The Adventure Project. I met her in Istanbul at the World Water Forum, where she presented at the “Thinking outside the water box” session, the session before mine.

The next thing I am going to do is click on the banner above and donate money to the project they are supporting in India. The money is used to train and employ handpump mechanics. And if you, like me, have been in India, then you know there is big pent-up demand for that. Lots of broken hand pumps.

To wrap up, here is a short video which @charmermark filmed of Becky when she worked with Charity: Water, together with Ian Thorpe, then at PumpAid.

Filed under: Crowd-sourcing, Heroes

Why more nuclear power does not make any sense

Nuclear power plant Mochovce

Picture by Michal Brcak.

I write this without holding any illusions that anyone will actually read this, nor do I expect to convert anyone. I just need to get it out of my system. So there you have it.

Safety of operating plants

According to Guardian Data there are 442 operating nuclear power plants in the world. On average they have been in operation for 26 years. [1] There are also some 66 reactors which have been shut down or decommissioned for some reason. In total these plants have operated a little over 13,000 years together. [2]

The International Atomic Energy Agency ranks nuclear events on a scale called INES, from 1-7 (anomaly to major accident); events rated 4-7 are classified as accidents. Again according to Guardian Data, there have been six accidents with wider consequences (levels 5-7):

1952, Chalk River, Canada, INES 5
1957, Windscale Pile, UK, INES 5
1957, Kyshtym, Russia, INES 6
1979, Three Mile Island, USA, INES 5
1986, Chernobyl, USSR, INES 7
2011, Fukushima, Japan, INES 5

In total (according to the Guardian) there have been 33 recorded serious incidents and accidents involving nuclear power. So with some 13,300 operating years, we have had one serious accident or incident per roughly 405 operating years, and a level 5-7 accident (like the ongoing Japanese accident) every 2,230 operating years.

With 442 plants in operation, if the same accident frequency holds, we will have a serious incident or accident every year. We will also have an accident on the level of Fukushima every five years.

It could be argued that things are getting safer, as we have had only three big accidents since 1957. But half of the incidents recorded by Guardian Data have happened since Chernobyl.

According to the World Nuclear Association (WNA) there are 62 nuclear power plants under construction, 158 on order or being planned, and a further 324 plants proposed. The WNA also suggests that at least 60 of the currently operating plants will shut down by 2030, which would leave us with 926 nuclear power plants operating.

If the failure rate stays the same, we would have a nuclear power plant incident every five months and an INES 5-7 accident every two and a half years.
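The frequency arithmetic above can be checked with a quick back-of-the-envelope calculation. A minimal sketch in Python, using the rounded Guardian-derived totals quoted above (13,300 operating years, 33 serious incidents, 6 level 5-7 accidents); note that the rounded year total gives intervals of roughly 403 and 2,217 years, slightly below the 405 and 2,230 quoted, which were presumably computed from an unrounded total:

```python
# Back-of-the-envelope nuclear accident frequency arithmetic,
# using the Guardian-derived figures quoted above.
operating_years = 13_300    # total reactor-years of operation, rounded
serious_incidents = 33      # recorded serious incidents and accidents
major_accidents = 6         # INES level 5-7 accidents

# Historical rates: reactor-years of operation per event.
years_per_incident = operating_years / serious_incidents  # ~403
years_per_major = operating_years / major_accidents       # ~2217

# Expected time between events for a given fleet size, assuming the
# historical failure rate stays the same.
for fleet in (442, 926):  # current fleet, and the projected 2030 fleet
    incident_interval = years_per_incident / fleet
    major_interval = years_per_major / fleet
    print(f"{fleet} plants: a serious incident every "
          f"{incident_interval * 12:.0f} months, an INES 5-7 accident "
          f"every {major_interval:.1f} years")
```

For 442 plants this gives a serious incident roughly every 11 months and a major accident about every 5 years; for 926 plants, roughly every 5 months and every 2.4 years, matching the intervals in the text.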

Why would the failure rate stay the same?

There is a clear track record of safety failure in the nuclear industry in several countries. Here are a few examples:

“The unfolding disaster at the Fukushima nuclear plant follows decades of falsified safety reports, fatal accidents and underestimated earthquake risk in Japan’s atomic power industry.” Bloomberg, 18 March 2011.

“State-owned Swedish energy concern Vattenfall has admitted serious security deficiencies at its controversial Forsmark nuclear power plant.” Power-Gen Worldwide, 12 February 2007

“Between 1950 and 2000 there have been 21 serious incidents or accidents involving some off-site radiological releases that merited a rating on the International Nuclear Event Scale, one at level 5, five at level 4 and fifteen at level 3.” Sellafield article, Wikipedia.

If countries like Japan, Sweden and the UK cannot make their nuclear power operators follow safety protocols, where do you expect it to work better?

But there are other reasons why we should question nuclear power.

Safety of storage

As a trained geologist I actually think spent nuclear fuel storage can be solved reasonably well. However, essentially nobody wants it in their backyard, and nobody has actually started long-term storage of spent fuel yet.

“Finland plans to have a long-term waste repository operational in 2020, Sweden in 2023 and France in 2025.”

In Scandinavia we have relatively good and stable granite bedrock to store spent fuel in, but where is the waste from the rest of the 440-900 nuclear power stations going to go? Maybe some poor country with good bedrock will become the nuclear waste dump of the world. Sounds great.

“A draft EU directive presented on Wednesday calls for national plans to be drawn up in the next few years, as the EU still has no final storage sites for nuclear waste.” BBC News, 3 November 2010.

All the spent nuclear fuel waste in the world is currently in short-term storage, like the storage which may be causing trouble in Japan at the moment.

“The Nuclear Regulatory Commission estimates that many of the nuclear power plants in the United States will be out of room in their spent fuel pools by 2015, most likely requiring the use of temporary storage of some kind.” US Nuclear Regulatory Commission

Great idea.

Nuclear proliferation

More nuclear power plants mean more nuclear weapons. The ongoing debacle with Iran, plus a state barely in control of itself (Pakistan) and one on the brink of collapse (North Korea) holding nuclear weapons, is, I believe, just the beginning of nuclear proliferation if we keep depending on nuclear power for our energy needs.

At some point nuclear weapons will be used. If the attackers of 9/11 had had access to a nuclear weapon, do you think they would have refrained from using it?

Complexity of nuclear power

If you invest a lot of money in more nuclear power plants, you can’t take any of that and give it to a family in a failed state like Somalia, to help fix their power shortages. But if you instead invest it in cheap solar power, like Nanosolar or First Solar, you can sell a Somali family a power plant at the household level, without major risk to them, their surroundings or the environment, and it is simple enough for even my old grandmother to operate.

The majority of the increased power need in the world is in countries which are not well developed, and it would be foolish to believe that we could help them by building and operating nuclear power plants. In fact you can’t run and operate a nuclear power plant unless you have sophisticated infrastructure, in the shape of a functional government, national administration, education and technology, so it is no real help for the developing world.

That Pakistan and North Korea have nuclear power is irrelevant in this context, as they only have it to produce weapons-grade plutonium. The importance of nuclear energy for them is less than secondary.

Uranium mining

Uranium mining is one of the nastiest businesses in the whole mining industry, and its environmental impact is big. We used to mine uranium in Sweden, but this was discontinued; we now, like the rest of Europe, buy our uranium from other countries, such as Australia, where the mines are in the outback. Out of sight, out of mind.

Nuclear is CO2 free

Whilst it is true that an operating nuclear power plant doesn’t emit much CO2, it does when you take the whole lifecycle into account: mining uranium, and building and decommissioning the plant.

“However, nuclear emits twice as much carbon as solar photovoltaic, at 32 gCO2e/kWh, and six times as much as onshore wind farms, at 10 gCO2e/kWh. A number in the 60s puts it well below natural gas, oil, coal and even clean-coal technologies. On the other hand, things like energy efficiency, and some of the cheaper renewables are a factor of six better. So for every dollar you spend on nuclear, you could have saved five or six times as much carbon with efficiency, or wind farms.” Nuclear energy: assessing the emissions, Nature, 24 September 2008.
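The ratios in the Nature quote are easy to sanity-check. A small sketch, where the 65 gCO2e/kWh figure for nuclear is my assumed mid-point of the “number in the 60s” mentioned in the quote; the solar and wind figures are quoted directly:

```python
# Lifecycle greenhouse gas intensity, in gCO2e/kWh, from the Nature
# quote above. The nuclear value is an assumed mid-point of the
# "number in the 60s"; solar PV and onshore wind are quoted directly.
nuclear = 65       # assumed
solar_pv = 32      # quoted
onshore_wind = 10  # quoted

print(f"nuclear vs solar PV: {nuclear / solar_pv:.1f}x")          # ~2x
print(f"nuclear vs onshore wind: {nuclear / onshore_wind:.1f}x")  # ~6.5x
```

This reproduces the quote’s “twice as much as solar PV” and roughly “six times as much as onshore wind”.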


Baseload

You will sometimes hear the term baseload, and hear that nuclear power plants are needed to provide baseload power. Baseload is what people call the power we need “regardless of whether the sun shines or the wind blows”.

An overview of why this is wrong can be read in Do we need nuclear and coal plants for baseload power? by David Roberts, and a more detailed description in Amory Lovins, Four Nuclear Myths, Rocky Mountain Institute, 13 October 2009.

Peak uranium

Something often overlooked is that there may not be as much uranium around, at the required price, as the nuclear industry would like. My take is that there is probably enough fuel for the 900 plants which are the maximum expected by the nuclear industry over the next 30 years, specifically as the fuel isn’t a significant part of the cost of building and running a plant, i.e. a nuclear power plant is relatively price-insensitive to higher nuclear fuel prices. More at Uranium Depletion and Nuclear Power: Are We at Peak Uranium?, The Oil Drum, 21 March 2007.

Cost of nuclear power

“The Union of Concerned Scientists recently reported that nuclear subsidies total nearly 7 cents per kWh, twice what a typical wind power plant receives and similar to the federal incentives offered for solar power.” Nuclear Power, Still not viable without subsidies, Union of Concerned Scientists, February 2011 [PDF file]

This article at Grist is a good overview: Cost, not Japan crisis, should scrub nuclear power. In particular, note the following quote:

“In the time it would take to build a nuclear plant (6-8 years, optimistically), every commercial energy technology could produce electricity for less.”

In other words, the cost of building energy systems based on wind power, solar, biofuel, small-scale hydro and other renewables will most likely have caught up with nuclear before you can complete a new nuclear power plant.

In the UK the nuclear industry refuses to build any new plants without huge government loan guarantees.

Fourth generation nuclear power plants

Another argument which often comes up is that the next generation of nuclear power plants will “improve nuclear safety, improve proliferation resistance, minimize waste and natural resource utilization, and to decrease the cost to build and run such plants.”

But these supposedly improved nuclear power plant designs are paper tigers.

Other than one design, which could theoretically see a first implementation in the mid-2020s, these are just research projects today and could at the earliest enter production during the 2030s. And if you have studied any climate science at all, you know that pouring billions into uncertain, centralised, expensive nuclear power station projects is not what we need right now. Essentially no nuclear power plant is ever delivered on budget, and these new research projects are bound to cost a lot more than what is presented right now. (If you can find any estimates at all. I couldn’t.)

“The severe difficulties of Finland’s Olkiluoto nuclear reactor, being built by Areva SA, the French state-owned nuclear construction firm, provide a reminder of how these problems unfold. Touted as the turnkey project to replace the aging cohort of nuclear reactors, the project has fallen three years behind schedule and more than 50% over budget. The delay has caused the sponsors of the project to face the problem of purchasing expensive replacement power; the costs of which they are trying to recover from the reactor builder. The cost overruns and the cost of replacement power could more than double the cost of the reactor.” The economics of nuclear reactors: Renaissance or relapse?, Mark Cooper, Senior Fellow for Economic Analysis, Institute for Energy and the Environment, Vermont Law School, June 2009

And that is not even a fourth generation design.

Another example of this type of argument was sent to me yesterday: “On energy and the end of civilization“, Warren D. Smith, 2001. The author lays out an argument that fossil fuels, including uranium but excluding coal, will be too expensive to use within the next 30–70 years. (Note that this was written before the understanding of peak fossil fuels was what it is today, when we have actually hit peak oil, but that is a different blog post, one day.) He then argues that solar won’t work, as it is too hard, and that the solution is … breeder reactors.

These nuclear reactors enable the use of U-238 (converted by neutron irradiation into fissile Pu-239) and Th-232 (converted to fissile U-233) as fuel, not just the (far rarer) U-235. This will enable energy production at current rates for 1000s of years using only known reserves of Thorium and Uranium.
Breeder reactors work. One was in large scale commercial use in France… only problem is: “in June 1997 France said it would scrap their highly controversial $4.7B Superphenix nuclear fast-breeder, saying it was too costly and of doubtful value.” A French govt report in 1996 concluded it had cost the state $12B. The planned shutdown in 2005 will cost $20B more. This was the world’s largest fast-breeder but it had managed to operate for only 6 months through 1997 since it began generating power in 1985. Oops. France’s electricity is 80% nuclear due to French leadership thinking it had no other choice.
There had been a major sodium leak at Superphenix in 1987 but it had begun operating again in 1994 after a 4-year layoff. Britain similarly had closed its Dounreay fast-breeder in 1995. The US operated an experimental fast breeder at Shippingport Atomic Power Station in the 1970s and early 1980s. The reactor had a core that was designed to produce Uranium-233 from Thorium-232. Although it showed no signs of ending its useful life, the experiment was ended due to budgetary concerns and interest in analyzing the core to see if breeding had occurred. When analyzed, the core indeed contained 1.3% more fuel than it had originally contained.
Japan in Dec 1995 shut down their Monju fast breeder, which took 12 years and $4.91 billion to build, after a massive coolant sodium (very flammable!) leak. There was a furor over cover-ups of the incident with doctored videos and incomplete reports.

Breeder reactors are also interesting because it is supposedly easier to produce weapons-grade plutonium in them. The person who sent this to me says this is our last hope.

Sounds great doesn’t it?


The nuclear lobby thinks we need to overlook the faults of nuclear power. They want us to accept regular catastrophic failures, nuclear weapons proliferation, the unsolved problem of final spent-fuel storage, the fact that investing in nuclear power doesn’t help the world’s 2 billion poor, that it emits more CO2 than renewables, and that uranium mining really messes up the environment.

They want us to invest in nuclear power because “There Is No Alternative” and they argue that nuclear power is a cost effective solution. But it isn’t.

So what is left of the argument? Nothing.


[1] There were three plants which didn’t have a start date for their operations, so I gave them the average operating lifetime. I calculated the years in operation by subtracting the start year from the current year, i.e. 2011 − start year = years in operation. The median number of years in operation is also 26.

[2] I assume the decommissioned plants had operated for 26 years, as I have no data for their operations. I have also ignored the time when the plants are down for maintenance.
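As a sketch, the calculation described in the footnotes looks like this in Python (the start years below are made-up placeholders, not the real plant data from the post):

```python
# Sketch of the operating-lifetime calculation from the footnotes:
# years in operation = 2011 - start year, and plants with an unknown
# start date are assigned the average of the known lifetimes.
from statistics import mean, median

CURRENT_YEAR = 2011
start_years = [1975, 1984, 1985, None, 1980, None]  # None = no start date

known = [CURRENT_YEAR - y for y in start_years if y is not None]
fill = mean(known)  # average lifetime assigned to the unknown plants
lifetimes = [CURRENT_YEAR - y if y is not None else fill for y in start_years]

print(f"mean: {mean(lifetimes):.1f} years, median: {median(lifetimes):.1f} years")
```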

Edit: Changed the title to “Why more nuclear power doesn’t make any sense” from “Why nuclear power doesn’t make any sense” as this was more in line with my intent of the article. I am not of the opinion that we should decommission nuclear power plants before their end-of-life, to replace them with fossil fuel power plants.

Edit 2: Added the section on fourth generation nuclear power plants.

Filed under: Climate Change, Facts, Social and economic policy

Law is hard. Code is harder. Why new internet and software architecture will define the future of society

From left: Lawrence Lessig, Vinay Gupta, Srikant Nadhamuni. Picture of Vinay by @charmermar, the other two by me.

Something which Vinay Gupta said the other day brought together several strands in my head. Vinay called it Foreign Policy by Internet Protocol. It is short enough to be quoted in full:

Foreign Policy by Internet Protocol
1. 5.1 billion cell phones, soon to be 7 billion smart phones on 3G networks
2. increasingly valuable services delivered over international borders, like Google
3. global shared knowledge bases like wikipedia or satellite maps
4. telemedicine, tele-engineering, micro-consultancy, social media and so on as the tools spread into new areas of life

Non-state actors conducting FPIP include WikiLeaks, Appropedia and many other groups. Currently it’s not at all clear that any state has begun to effectively deliver FPIP.

Vinay Gupta, Foreign Policy by Internet Protocol (2011) [1]

If you combine the thought that our communications infrastructure is going to start to dictate how we think about the world with what Lawrence Lessig says, “The Code is the Law”, then a number of things which are going on in the world today can be seen in a very different perspective from what you see in your average newspaper opinion piece.

Of course, Lessig was years ahead of me in thinking about this; in his piece The Code is the Law from 1999 he says [2]:

“The single most significant change in the politics of cyberspace is the coming of age of this simple idea: The code is law. The architectures of cyberspace are as important as the law in defining and defeating the liberties of the Net.”
Lawrence Lessig, The Code is the Law (1999)

The Code is the Law

Consider the example of copying copyrighted works. You break a multitude of rules and laws if you copy a copyrighted work. Some countries are trying to implement some pretty draconian laws to stop copying over the internet, like the “three strikes and you are cut off” laws [3], which are being met with quite a lot of resistance at the moment. But that hasn’t really stopped anyone from actually breaking these laws. The flow of information over peer-to-peer (P2P) networks is increasing, and new laws seem to have only a short-term effect on people’s behavior [4].

Google holds billions of images on their giant server farms, with caches of images from web sites. According to the letter of the law they are breaking copyright law when doing that. YouTube’s HTML 5 trials made it possible to download every video on YouTube to your computer (they seem to have disabled that again), and there is an enormous amount of material on YouTube which breaks copyright laws and rules. There are some ongoing big lawsuits against Google, which owns YouTube, but in essence, for most people and companies YouTube is more useful than it is a threat, despite what the law says. Add to this that the very function of the internet requires you to make a copy of a web page or a picture to view it on your computer, and that it is trivial to copy it from the web browser cache to save it for later.

In short, the architecture of the internet has a stronger influence on how people behave than what the law says, as long as the majority of the people see a significant benefit.

The extension of this is that software architecture starts defining how our society behaves. Furthermore, I think that internet architects and coders who build useful systems may, in the long run, have a bigger influence on our future society than politicians and the traditional power brokers. Why do I believe this?


Filed under: India, ITC technology, Social and economic policy

Identifying more than a billion Indians, another take on Gov 2.0


Srikant Nadhamuni, tech lead for the Indian UID project. Image: Gireesh G V for Forbes India

The Indian UID project is very interesting to me, as the work they are doing is done on an enormous scale. There are other systems which reach this scale, and arguably are more complex than this (Facebook for example), but it is still impressive.

“By 2014, the government wants half of India’s population to be allotted UID numbers. To do that, the Authority will photograph a staggering 600 million Indians, scan 1.2 billion irises, collect six billion fingerprints and record 600 million addresses.”

Read more in this rather good Forbes India article. Another article about this was published in the Economist yesterday (although, together with my friend Gabriel, I am still pondering what the 14 billion transactions per second actually mean).

Whilst a country like Sweden, where I live, is struggling with a hodge-podge of identification services for use online as well as offline, India isn’t only going to launch an online system of staggering scale, it is also going to leapfrog our old systems entirely. Once they are up to speed with issuing IDs they could issue biometric IDs to the whole Swedish population in just over a week. At peak they expect to issue 1 million IDs per day.
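The “just over a week” figure is simple arithmetic, assuming Sweden’s population is about 9.4 million (my rough figure for 2011):

```python
# At the quoted peak rate of 1 million IDs per day, enrolling a
# population the size of Sweden's (~9.4 million, my estimate for 2011)
# takes just over nine days.
PEAK_IDS_PER_DAY = 1_000_000
SWEDEN_POPULATION = 9_400_000

days_needed = SWEDEN_POPULATION / PEAK_IDS_PER_DAY
print(f"{days_needed:.1f} days at peak issuance rate")
```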

Srikanth and my wife Anke taking a break during the bicycle ride on the outskirts of Bengaluru, buying some coconuts from a street vendor. January 2010.

A friend of mine, Srikanth Nadhamuni, leads the technical development from the Indian government side, and it is really rather interesting to talk with him about the implications of this system.

One aspect which doesn’t get much coverage is that they are going to use the UID system to facilitate very inexpensive money transfers for people. This is in a country where a lot of people, maybe even most of them (hundreds of millions of people) don’t actually have a bank account at all today.

Another aspect which is interesting is that the team started the development in a way which would be very familiar to many Hacker News readers. They worked out of an apartment in Bangalore, where several team members lived as well as worked, in a true startup atmosphere. Software companies like Microsoft and Google would show up with teams and end up sitting around the kitchen table, or on the spare bench from the hallway, to participate in sessions where the project was being discussed.

They have software volunteers, expat Indians, coming in from all over the world to work on the project, and the top-level people behave just like any other software startup entrepreneurs you would expect: sitting up until 4 am doing code reviews, walking into a room and asking “How’s it going?” Not the bureaucratic India you would expect.

If I wasn’t working on what I work on right now I would probably have been a volunteer on the project myself, if they would have had me that is. :)

Edit: I have written about the UID project before, but it was quite short.

Filed under: India, ITC technology, Social and economic policy

Governance is the last mile problem

Picture by Mark Charmer

Yesterday I had the privilege to spend several hours with Sunita Nadhamuni. We had a lot to talk about, as we hadn’t met since the summer. Sunita sits on the board of, as well as runs, one of my favorite organisations in development aid, Arghyam, which means I am lucky enough to be able to book some time with her and make it seem legitimate.

As usual the topics of discussion ranged far across the board, but what really made me grab a notebook to quickly scribble down a quote was something she said while we were discussing fundamentals of development aid. Sunita said:

“Governance is the last mile problem.” – Sunita Nadhamuni

The last mile is an expression often used in the internet and telecommunications business when discussing how to get people connected to telephone or internet services. There is often sophisticated communications infrastructure available locally, but often no money, or rather a perception that it is too expensive to get everybody hooked up. The investment in the required “last mile” connection is often unpalatable, but without it there is no point in building the infrastructure in the first place. For a northerner like myself, it may be easiest to understand the challenge through an example: the single biggest cost in transporting food to your table is not where one would expect it to be, getting it to the supermarket, but getting it from the supermarket to your home, i.e. the last mile. 1

But back to development aid. In our discussions yesterday we noted that in the segment of development aid in which we work, water and sanitation, there seems to be a particular challenge in getting these services deployed on the ground, related to the local situation. It doesn’t matter if the national or state government sets goals, understands the problem and sends out decrees, if there is no local capacity to understand the problem and how to approach solving it. The most successful efforts at solving water and sanitation problems (and I believe this applies to education, healthcare and other areas as well) are when you manage to engage the local community to the point where it not only understands the problem, but owns the solution. It doesn’t matter how many NGOs work on the issue in a country like India, or anywhere else for that matter, if you can’t successfully get the local community to engage with the problem. In countries or regions which have functioning water and sanitation systems, the solution nearly exclusively involves the local community and the local government.

In communities which do not have these services, the main problem is not what technology to use, or how to build it, or who should be responsible for or own it, but a matter of getting people to sit down together, discuss the issue and work together to solve the problem. It is nearly always a matter of governance.

At university I spent four years studying environmental problems and water related issues, but I only had three (3!) days learning about governance. When discussing water and sanitation issues there seems to be no end to the discussions about what technology to use, whether access to clean water is a human right or not, and government policy on the subject. But good examples of how to make it work locally are harder to come by, or maybe harder to share, as the context of a successful solution is often what I would call “hyper-local”. In India, the central government actually seems well aware that the solution should be local, but until now it seems to have had a hard time translating that into action. This may be about to change. Arghyam is currently working with the Indian central government to review the progress of the current five year plan and to plan the next five years, and this time, possibly for the first time, there is organised feedback from the grass-roots level. Hundreds of participants from the gram panchayat level of government (village council) are participating and collaborating with other participants from state and national level to give feedback on the central government’s plans.

Technically we know what to do. The money is there to do it. The challenge is to engage people in an open discussion to make it happen. It is democracy. Governance is the last mile problem in water and sanitation. When you make that work the rest is easy.


1. The Validity of Food Miles as an Indicator of Sustainable Development, DEFRA, July 2005

Edited: 13 January 2011, fixed a spelling error. Thanks to @PraveenaSridhar for finding it.

Filed under: Arghyam, Development aid, India

