The future of programming

Unraveling what programming will need for the next 10 years.

Programming is changing. The PC era is coming to an end, and software developers now work with an explosion of devices, job functions, and problems that demand approaches different from those of the single-machine era. In our age of exploding data, the ability to do some kind of programming is increasingly important to every job, and programming is no longer the sole preserve of an engineering priesthood.

Is your next program for one of these? Photo credit: Steve Lodefink/Flickr.


Over the course of the next few months, I’m looking to chart the ways in which programming is evolving, and the factors that are affecting it. This article captures a few of those forces, and I welcome comment and collaboration on how you think things are changing.

Where am I headed with this line of inquiry? The goal is to describe the essential skills that programmers will need for the coming decade, the places they should focus their learning, and to differentiate between short-term trends and long-term shifts.

Distributed computing

The “normal” environment in which coding happens today is quite different from that of a decade ago. Given targets such as web applications, mobile and big data, the notion that a program only involves a single computer has disappeared. For the programmer, that means we must grapple with problems such as concurrency, locking, asynchronicity, network communication and protocols. Even the most basic of web programming will lead you to familiarity with concepts such as caching.
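To make the caching point concrete, here is a minimal memoization sketch in Python; `slow_lookup` is a hypothetical stand-in for a database query or network call:

```python
from functools import lru_cache

calls = 0  # counts how often the expensive path actually runs

@lru_cache(maxsize=256)
def slow_lookup(key):
    """Stand-in for a database query or HTTP request."""
    global calls
    calls += 1
    return key.upper()

# Repeated requests for the same key hit the cache, not the backend.
results = [slow_lookup("user:42") for _ in range(5)]
```

Five requests, one backend hit: the cache absorbs the rest, which is exactly the trade-off (freshness for speed) that web programming forces you to think about early.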

Because of these pressures, we see phenomena at different levels of the computing stack. At a high level, cloud computing seeks to mitigate the hassle of maintaining multiple machines and their network; at the application development level, frameworks try to embody familiar patterns and abstract away tiresome detail; and at the language level, concurrent and networked computing is made simpler by the features of languages such as Go or Scala.
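As a small illustration of what such abstraction buys (sketched here with Python's standard library rather than Go or Scala), a thread pool lets the programmer express a concurrent fan-out as a simple map, leaving thread creation, scheduling, and joining to the framework; `fetch` is a placeholder for a real network call:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    """Stand-in for a network request; real code would use an HTTP client."""
    return (url, len(url))

urls = ["http://a.example", "http://bb.example", "http://ccc.example"]

# The executor handles thread creation, scheduling, and joining;
# the programmer only expresses the map over the inputs.
with ThreadPoolExecutor(max_workers=3) as pool:
    sizes = dict(pool.map(fetch, urls))
```

The locking, queuing, and lifecycle concerns are still there; they have simply been pushed down into the library, which is the pattern repeated at every level of the stack.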

Device computing

Look around your home. There are processors and programming in almost every electronic device you own, which certainly puts your computer in a small minority. Not everybody will be engaged in programming for embedded devices, but many developers will certainly have to learn what it is to develop for a mobile phone. And in the not-so-distant future: cars, drones, glasses and smart dust.

Even within more traditional computing, the rise of the GPU array as an advanced data-crunching coprocessor calls for non-traditional programming. Different form factors require different programming approaches. Hobbyists and prototypers alike are bringing hardware to life with Arduino and Processing.

Languages and programmers must respond to issues previously the domain of specialists, such as low memory, slow CPUs, power consumption, radio communication, and hard and soft real-time requirements.

Data computing

The prevailing form of programming today, object orientation, is generally hostile to data. Its focus on behavior wraps up data in access methods, and wraps up collections of data even more tightly. In the mathematical world, data just is; it has no behavior. Yet the rigors of C++ or Java require developers to worry about how it is accessed.

As data and its analysis grow in importance, there’s a corresponding rise in the use and popularity of languages that treat data as a first-class citizen. Obviously, statistical languages such as R are rising on this tide, but within general-purpose programming there’s a bias toward languages such as Python or Clojure, which make data easier to manipulate.
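The difference shows up even at a small scale. In a data-first style, records are plain structures and transformations are ordinary functions over them, with no accessor ceremony (a Python sketch with made-up sample readings):

```python
# Plain data: no classes, no getters; just values.
readings = [
    {"sensor": "a", "temp": 21.5},
    {"sensor": "b", "temp": 19.0},
    {"sensor": "a", "temp": 22.5},
]

# Transformations are functions over the data, not methods locked to it.
by_sensor = {}
for r in readings:
    by_sensor.setdefault(r["sensor"], []).append(r["temp"])

averages = {s: sum(ts) / len(ts) for s, ts in by_sensor.items()}
```

Nothing here needs to know how the data is "encapsulated," which is precisely the point: the structure is open to any function that wants to work with it.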

Democratized computing

More people than ever are programming. These smart, uncounted, accidental developers wrangle magic in Excel macros, craft JavaScript and glue stuff together with web services such as IFTTT or Zapier. Quite reasonably, they know little about software development, and aren’t interested in it either.

However, many of these casual programmers will find it easy to generate a mess and get into trouble, all while only really wanting to get things done. At best, this is annoying, at worst, a liability for employers. What’s more, it’s not the programmer’s fault.

How can providers of programmable environments serve the “accidental developer” better? Do we need new languages, better frameworks in existing languages? Is it an educational concern? Is it even a problem at all, or just life?

There are hints towards a different future from Bret Victor’s work, and projects such as Scratch and Light Table.

Dangerous computing

Finally, it’s worth examining the house of cards we’re building with our current approach to software development. The problem is simple: the brain can only fit so much inside it. To be a programmer today, you need to be able to execute the program you’re writing inside your head.

When the problem space gets too big, our reaction is to write a framework that makes the problem space smaller again. And so we have operating systems that run on top of CPUs. Libraries and user interfaces that run on top of operating systems. Application frameworks that run on top of those libraries. Web browsers that run on top of those. JavaScript that runs on top of browsers. JavaScript libraries that run on top of JavaScript. And we know it won’t stop there.

We’re like ambitious waiters stacking one teacup on top of the other. Right now, it looks pretty wobbly. We’re making faster and more powerful CPUs, but getting the same kind of subjective application performance that we did a decade ago. Security holes emerge in frameworks that put large numbers of systems at risk.

Why should we use computers like this, simultaneously building a house of cards and confining computing power to that which the programmer can fit in their head? Is there a way to hit reset on this view of software?


I’ll be considering these trends and more as I look into the future of programming. If you have experience or viewpoints, or are working on research to do things radically differently, I’d love to hear from you. Please leave a comment on this article, or get in touch.

  • What a great framework you’ve laid out for exploration, Edd! I can’t wait for future installments in this series.

    In the meantime, what O’Reilly books would you recommend to get a head start in each of these areas?

  • John buju

    Great research but only time will tell. Thanks

  • Very interesting post, thanks for sharing your thoughts! Making the problem space smaller and building on top reminds me of the chains of meaning in the STEPS project by Viewpoints Research Institute.

    • Thanks! I think my underlying sentiment here is that we should be at the age of telling computers what we want, not what to do.

      • I agree with this sentiment, Edd and it prompts inclusion of this brief philosophical module. Sometimes we need to deploy the seemingly obtuse and hyperbolic in order to shake loose the institutionally entrenched and myopic.

        I’d submit that the aforementioned wants and needs should be HUMAN wants and needs. Think, “Good morning #GlobalBrain, we can haz #AbundanceAlgorithm to #EndPoverty today? tyvm. mkay bai.” For the more tactically minded, put work into optimizing MatterNet logistics, for example.

        Contemplate #OpenGov and #GovAsAlgorithm as abstraction layers for accomplishing #EarthOS optimization for flourishing sentience. I say flourishing sentience instead of human flourishing because, as a species, we’re not entitled to continuously deplete or destroy other species’s resources or environments for our own self-indulgent cushy creature comforts.

        The era of maximizing growth is over. Welcome to the era of maximizing good. The era of maximizing competitive abilities is over. Welcome to the era of maximizing cooperative capacities. The era of fixating on means is over. Welcome to the era of accomplishing ends.

        These ideas may seem too abstract and off topic for some. I’d submit that such ideas are actually at the core of why we’re programmers in the first place and that our universe is far weirder and more wonderful than the average meatbot dares imagine.

        Consider Leonard Susskind’s take on the Universe as Hologram. Susskind reminds us of Sherlock Holmes’s approach that, regardless of how seemingly unlikely, if we remove all the impossibilities, whatever remains is the truth.

        See you on the other side, ye good fellow users & programs

      • I agree that we need some kind of declarative APIs for end users. But we also need “hackers” to program all of the mid-level abstractions. It is interesting that when open-source libraries and APIs are used as building blocks for big systems, it becomes easier to go deeper if you understand particular problem spaces (no need to understand everything).

  • I like this post… I have a long-running Java effort to address all of the above, and the backing principle has always been to relentlessly pursue a model of development that matches the future — in my case, by not creating a new language. I’ll try not to be spammy, but click my name and it will forward to it. I’ve been working on a major evolution of my efforts since the day the Android / G1 hit my hands, as I immediately realized the pain ahead in regard to device / OS fragmentation (the real kind; i.e. bugs between OS versions), let alone proliferation beyond phones / tablets.

    I’d be glad to chat about all of this in detail, but I’ll try to keep things short for now; you got me excited though… Oops, not so short… ;)

    Data computing:
    Most definitely traditional OOP has thrown a wrench in how data is mixed w/ behavior and accessed in addition to making things difficult when it comes to long term data schema storage. In the context of Java / high level languages the approach that is most useful is high level data oriented design. This entails explicitly separating data from logic and the best way forward to do this at least w/ Java is a component oriented approach where one creates data components that just store data and logic components that act upon data components.

    A component oriented approach avoids inheritance (the is-a relationship) as the core organizing principle. With my efforts there is a lightweight query interface that allows containers (ComponentManager) to store any combination of components, data or logic. The goal is to focus on the _implicit_ has-a relationship / composition. This is OOP’s hidden last stand, and not possible without a specialized API for the purpose. Due to the necessity of a specialized API, though, I am at least tempted to call it a new paradigm: COP (component oriented programming).

    A benefit of just storing data in a separate component is that it becomes both schema and storage rolled into one that can be versioned if necessary. Ideally one designs data components with forethought to not require additional versioning. Perhaps, creating an additional data component for new data to be subsequently composed w/ versioning as a fallback.

    A final benefit of a high level data oriented approach is that it opens up generic recycling. A ComponentManager is recycled and all its individual components are split up and recycled. A game example that makes quick sense is an enemy entity dying, it is recycled, the Position data component (IE x, y, z) is subsequently recycled and can be composed into a new entity say a different type of enemy or any other type of entity (player, a tree, bullet, etc.). The way I like to envision this is “derezzing” ala Tron and reconstitution. This general approach fits game development particularly well and is gaining momentum, but if done right and the component oriented API is a superset to an entity system then it applies to any category of app development.
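    The separation described here — data components holding only state, logic acting on them from outside, and components recycled between entities — can be sketched in Python (rather than the Java of the actual effort; all names below are illustrative, not the real API):

```python
from dataclasses import dataclass

@dataclass
class Position:  # a pure data component: state only, no behavior
    x: float
    y: float
    z: float

@dataclass
class Health:
    hp: int

# An entity is just a bag of components, composed rather than inherited.
enemy = {"pos": Position(1.0, 2.0, 0.0), "health": Health(hp=10)}

def move(pos, dx, dy):
    """Logic lives outside the data it acts on."""
    pos.x += dx
    pos.y += dy

move(enemy["pos"], 3.0, -1.0)

# "Recycling": the Position component outlives its entity and can be
# composed into a new one (a tree, a bullet, another enemy).
tree = {"pos": enemy.pop("pos")}
```

    A real component manager would add pooling and a query interface, but the shape of the idea — composition over inheritance, data separated from logic — is already visible here.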

    Distributed computing:
    High level data oriented design opens the door to in-process multithreading (re concurrency, locking, asynchronicity). Need to share data between threads; don’t lock, don’t synchronize. Make a copy of a data component which pulls from the recycler if possible before creating a new object and simply marshal it across threads. There is no need for DAO (data access object) or serialization as data components are already DAOs in and of themselves!
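    The copy-instead-of-lock idea can be sketched as follows, with an invented Python stand-in for a data component: each thread receives its own copy via a queue, so the component itself never needs a lock.

```python
import queue
from copy import copy
from dataclasses import dataclass
from threading import Thread

@dataclass
class Sample:  # a data component: plain state, no behavior
    value: int

inbox = queue.Queue()
results = []

def worker():
    # The worker owns its copy outright; no locks on Sample are needed.
    item = inbox.get()
    results.append(item.value * 2)

original = Sample(value=21)
inbox.put(copy(original))  # marshal a copy, not the shared object

t = Thread(target=worker)
t.start()
t.join()
```

    The queue is the only synchronized structure; the data component travels by value, so neither side can race on it.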

    Dangerous computing:
    The kitchen sink SDK / framework (cough Android) is dead; long live the kitchen sink…. No, kill it w/ fire ASAP or instrument a flexible layer, i.e. middleware, over the sink. Highly modular development is a future angle. The SDK / runtime of my efforts is extremely modular and very granular. As things go the current SDK / runtime is composed of 700 modules w/ minimal dependencies between them. One can easily put together a minimum module set to provide a purpose-driven SDK / runtime, thereby only exposing the necessary APIs to get the job done. This allows thorough vetting, from security issues to minimizing the “house of cards”. Building an ADK project on top of Android to control, say, a mechanical apparatus; well, you certainly don’t need a glut of 2D / 3D graphics modules suited for game development. Minimizing the SDK / runtime configuration makes it much easier for developers to focus on the APIs that accomplish specific goals.

    Ideally where my efforts are going is to separate middleware modules / runtime on top of Android (or any Java environment) that can be updated without flashing firmware to add new SDK functionality; this is the main culprit of real fragmentation w/ Android (IE the serious bugs across OS versions that never get updated on all devices). Admittedly things are better, but I can elaborate on some bad core Java implementation aspects _still_ in Android 4.2; cough horrible annotation performance.

    Device computing:
    Ah, yes, another benefit to a modular SDK / runtime that can be configured appropriately for the device and task at hand. I suppose you could say my efforts create a meta-SDK / runtime that runs on any minimally Java compatible environment. From the get go for instance I’m excited to create specialized audio hardware running Android w/ my middleware efforts. Folks creating custom devices running Android likely will want to expose new SDK / API features without updating firmware. Will get back there soon when my middleware efforts are out in the wild.

    Regarding domain specialization this is nothing new. Consider game / graphics engines that abstract the vagaries of OpenGL/ES making things much easier for developers who are not as seasoned. There is no reason a modular framework can not encapsulate any domain specialization making it much easier for mere mortal developers to handle complex tasks.

    Democratized computing:
    Well, all of the above somewhat begins to answer this question… Are new frameworks needed in existing languages? Most definitely, however any framework still needs to be configured at the very least. User (mere mortal developer) friendly tooling that integrates w/ standard IDEs is the key and where I’m headed before launch of my efforts. In fact it’s necessary if anyone is to adopt it. One can create the best framework in the world, but adoption is greatly aided through ease of use. Ease of use does not stop at configuration, but continues through a suite of high level tooling that makes it easier for developers and non-developers alike to complete useful tasks. I can expound, but this post is long already…

    I suppose one can chat a bit more on all of the above, but hopefully that starts a conversation. I’d be glad to get in touch on or offline. Again I like your post as it validates the journey I’ve embarked on and hopefully soon will share with all.

  • Riyadh Al Nur

    The problem today is that there are so many options it is hard to choose one. Every language has its flaws and upsides. A programmer is no longer limited to learning one single language; they have to learn a few to keep up. If you are a web developer, it is not enough anymore to just know HTML, CSS and JavaScript. You need to learn PHP, MySQL, and the list goes on. Yes, we now have learning resources at our fingertips, but there is only so much a human brain can consume. The human brain, like anything else, can be compared to a hard disk drive: to put in new information, old information has to be removed, but in reality it is not as easy as clicking delete and removing the junk or old information.

    • I think programmers need to learn and understand programming idioms, not languages or particular patterns. For example, today we are implementing powerful ideas that come from Smalltalk (look at Node.js or MagLev from this point of view). If you know some basic principles of Smalltalk to some degree, then you are able to understand most *new* features very quickly.

      • jimmd

        Yes, while it is important to understand the logic, the process of translating those concepts from language to language or framework to framework is time consuming. That is my biggest complaint about the plethora of languages and frameworks out there today. I know what I want to do, but figuring out the syntax and idiosyncrasies on how to do it wastes a lot of time.

      • JohnB

        I agree with this, but a problem is that employers are often looking for specific skills rather than people who are generally good.

    • Nyein Aung

      Your idea is right; I am also stuck on which programming language to choose as a professional one. Application programming can be considered well developed, but I find web programming still needs some development. At school I have to learn both application and web programming, which is really time-consuming for me.

  • Matej Kozlovský

    Yesterday I watched an interesting video, and some of the principles described in it should be used in the future development of programming.
    For example, we should not wait for better languages that will solve every problem we have in programming. We need better IDEs (environments), because we can’t efficiently express our ideas when those ideas are twisted by insufficiently designed tools for translating our thoughts into lines of code. Just as we can’t translate one language into another in every detail, we can’t express every idea in source code.
    We are in desperate need of IDEs that are not only good text editors with built-in dictionaries, but also learning environments which can adequately describe how our programs behave. They should not just tell us that an error occurred, but also why. They should use some of the ideas from the “ladder of abstraction” to describe how the data in our programs behaves under the influence of every line of code.
    I hope these tools come soon.

  • An interesting trend is tracking, or being able to better discuss, an organization’s ability to deliver modular code / software that can be deployed in a variety of environments. There seemingly will be more ubiquity on the server / cloud / big data side compared to the client / end device, which may be considerably different. I think the Modularity Maturity Model initially discussed by Dr Graham Charters from IBM at an OSGi event a little over a year ago is interesting food for thought. It’s not complete or 100% definitive, but it’s an interesting start for discussion.

    With my efforts I’m at level 4 and attempting to get to level 5 before launch. Integration w/ OSGi or another module system that supports lifecycle / hot swapping of modules, etc. is what level 6 requires. As things go I want to plan to at least support multiple lifecycle systems, so not just OSGi.

    In regard to Matej Kozlovsky’s comment below on the need for better IDE environments. I agree, but I think that as long as an IDE provides a plugin / module system it is up to the SDK providers to provide better plugins that improve the ability to realize ideas and visualize the runtime of the system at hand.

    One thing that is exciting to me, at least, given that my efforts are modular via a component oriented API at runtime, is that the runtime can be instrumented without being overly invasive; thus it will be possible to create a GUI of some sort to visualize the runtime environment, which may provide insight into what is happening beyond perusing log files and / or receiving exceptional conditions w/ a stack trace.

  • lubabum

    Very interesting post.

    It is also a subject I’m passionate about, and one that worries me as a systems engineer and university professor. In class one sees how systems students have to learn ever more languages and tools to do modern (i.e. web) development.
    I have read all the comments, yet I do not see any mention of application-generating tools, and it is to these that I wish to draw your attention. I think the future of programming will be more focused on tools that help understand and model the problem. Programming in any language is a recurring task, and we know that systems automate repetitive tasks; so programming is automatable, and I have come to believe that the heart of software development is making the best possible analysis. Therefore we need analysts as well as programmers. In this regard I would like to point you to a tool called GeneXus, an application generator that I see as the type of tool and IDE many are willing to adopt for their development: a tool with which you choose the language (Java, C#, Ruby) without knowing the language, and the database (Oracle, SQL Server, MySQL, etc.) without knowing the database; using a set of rules, logic and objects in the tool (a KB, or knowledge base), GeneXus generates the database and the programs. Moreover, you can include programming for mobile devices (BlackBerry, iPhone, Android) in your knowledge base.

    The invitation is not only to take a look at the tool, but to keep in mind that the future of programming lies not in programming but in analysis, which means we need more analysts than programmers.

    If you wish to know more, write to me about your experiences.

  • >>Object orientation, is generally hostile to data. Its focus on behavior wraps up data in access methods, and wraps up collections of data even more tightly. In the mathematical world, data just is, it has no behavior<<

    I disagree. Wrapping is derived from defining. Even in the real world you need definitions. What you may mean is that the definitions used (OOP’s) are not loose enough, but my experience with programming shows me that leaving stuff in the logic without a really strict definition is a scenario that will always lead to errors in the end product. The trick would be to provide a definition that is at once strict enough and flexible enough – like the waves and the quasi-stable energy clusters (particles) that relate to them.

    In the scope of your article, I think what we need is to transform the PC into a digital organism that extends our brains. The exponential increase of data (in amount and in types) bugs us. Our world is getting more and more digital/smart, but the smarter our devices get, the more stupid we get. This happens because all programming, and development in general, is done in ‘black boxes’ – you need some knowledge to step up and use the complete functionality. This can’t( ;) ) be avoided when talking about different areas of science, but what about our devices? So many platforms, so many incompatible interfaces (HW and SW)… chaos!

    For me, the PC needs to manage that stuff on its own in some way; the controlling logic is subject neither to the CPU nor to a programmer. The first resides in an engine (think of a car) and the second in a designer (who places the steering wheel…). The driver who actually chooses the path, the speed, the stops, even the features (when buying a car) is the user. If we stay with the car as a metaphor, you do not buy 10 cars because each one is perfect at one thing (Visual Studio, Photoshop, Cinema 4D, Catia, etc.) and then switch between them as needed. You have one car, maybe two, and those differ mainly in hardware (faster engine, more space, off-road, fuel consumption) and not in function (moving you and your luggage from A to B in an acceptable time span, given the length of our lives).

    The efforts in mechanical engineering over the past centuries must show us where the SW engineering must go(from user point of view). The OS is not a car in this context, it is a platform where developers can build cars for the users to drive on the network(Data, Global, Device – even in the scope of single PC). This has to change. Given the amounts of options available we just can not afford to be outsmarted by our own tech, not because it is smarter than us, but because we are developing it in such way that a Tsunami from different user patterns emerges along with the actual problem we try to solve.

    Looking from the point of view of a beginner (for me always the right point from which to evaluate a tech), we are forced to specialize in a particular tech (there is no way to unify them all), while the places where technology in general is applicable keep growing physically (more devices, more functions), yet the possible applications of the tech we specialized in shrink – not outdated, but modified to the point where changing the tech is better and easier. When I started programming there was a saying that a programmer is productive until 30 – totally true given that they were doing assembly back then. Currently I am 30, still productive and capable, but I face the same dilemma because of the web.
    I actually do not try to catch up with the new techs. The fellows in the IT industry are so fast at deploying new technologies that it just isn’t worth it to keep climbing – it is like a sandstorm :) money blows, and more sand particles get blown up into the air.

  • Tihomir Stoev

    Wrong account for login :( should add delete for posts.

  • In the controls space we have specialized programming environments to let engineers write programs for what amounts to specialized devices; SCADA and PLC development is an example. The issue, of course, is that to make things simple you have to simplify choices and sacrifice flexibility, and to make things safe, the same limitations apply. Let’s not make those mistakes for other devices.

  • Liftoph

    First, you must define what is a programmer. And when you do, remember the long tail.

  • JCW

    I take issue with “What’s more, it’s not the programmer’s fault.” in the section on Democratized computing. If it has become part of their job, then it *is* their responsibility to learn how to do it well, or select a new line of work. Sidenote, although I understand where the article is going, I was writing microcode for a standalone device 20 years ago, so it isn’t that revolutionary to observe that not all programs are written for PCs.

  • Ross


    — Highly/massively multi-core programming (e.g. compiler design, concurrency mgmt, debug & test methods and tools). We’re going to need to “get” this.

    — Ground up/wholesale re-do of Von Neumann architecture. Something based on Petri Nets, possibly?

    — Mesh computing (Under your Distributed Computing heading)

    — Adaptive computing (Think: ASIC/FPGA “Frankensteined” w/existing systems. How to manage?)

    — “Stack” of languages of increasing abstraction. Not a ‘ladder’ or a ‘tree’, but rather a “web”. How to traverse this web optimally? (Human language is a model of this, from raw vocalizations to abstruse semantics — we need it all to work together.) Underlying theory based on something like Stilman’s Linguistic Geometry?

    What a great set of topics you’ve laid out. Can’t wait to see where you go with it…

  • Adrian

    Dangerous Computing:
    Quite correct, we need a completely new paradigm for doing irregular parallel computing. Data parallel has been around for some time and is ably covered by libraries like Intel Concurrent Collections, but non-trivial irregular parallel computing is of a different order. I believe the answer is Composable Autonomous Agents. The first step is the separation of the infrastructure of the solution (what is connected to what) from the operations (transforms operating on data). The latter is well served by serial programming languages, but the former will require a completely new tool. Text is too serial to serve; only a graphical tool will do.

  • Nyein Aung

    It is a good idea to manage complexity by using tools and frameworks. However, I worry that programmers will be fired in the future if specialized programming tools evolve to complete a whole project with only mouse clicks. E.g. we can create a database connection by giving only the name of the database, without hand-coding the creation.

  • We are in the midst of an era of programming I’ve dreamed about for more than 20 years. I consider myself a pragmatic programmer. I program to solve problems in a tiny operation where if it can’t get done in less than 5 hours, it’s not going to happen. The scope of problems I could solve 10 years ago was very small. Every project required too much time on non-value aspects of development – by which I mean housekeeping tasks that must be done and done right, but generally don’t do anything to solve the original problem – e.g. authentication/authorization/data layer/ui forms/api, etc.

    Today it’s not 5 hours, but literally minutes, to wire up data from multiple systems, or to create a quick object+ui, or to extend a system with a powerful new tool. One of my favorite recent examples didn’t even require programming at all – our marketing team wanted to enable ‘chat’ on the landing page. Literally 5 minutes later we had Olark in place with a pumpkin skin to boot and a live connection to staff smart phones. The same goes for integrating with Freshbooks, Salesforce, Runmyprocess, Mailchimp, Xero, etc., etc.

    I teach entry-level application development, and while my emphasis remains on fundamental programming and data concepts, the context is integration. I believe we are at the very very beginning of 10 years of integration focus as an industry.

    Unless you run a monolithic ERP, and even then you probably don’t have ‘all’ the modules, we all live in operations that can’t answer basic business performance questions because the data is not integrated. That is all about to end as we wire up these previously isolated process islands.

    The best part for developers as a workforce – only a small fraction of these integrations will be able to be ‘standard’. The rest will need to be custom built for each situation – the T in ETL is where each operation will gain its competitive advantage. With E and L so easy, we can spend our time and budget on what really solves the problem.
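    As a sketch of that E-T-L split (invented sample records in Python; the transform is where the situation-specific logic lives):

```python
# Extract: rows as they arrive from two hypothetical source systems.
crm_rows = [{"id": 1, "name": "Acme", "spend": "1200"}]
billing_rows = [{"customer_id": 1, "invoiced": 1500}]

# Transform: the custom, business-specific step — join and normalize.
def transform(crm, billing):
    invoices = {b["customer_id"]: b["invoiced"] for b in billing}
    return [
        {"id": c["id"], "name": c["name"],
         "spend": int(c["spend"]),            # normalize stringly-typed values
         "invoiced": invoices.get(c["id"], 0)}
        for c in crm
    ]

# Load: hand the merged records to the target system.
merged = transform(crm_rows, billing_rows)
```

    Extraction and loading are generic plumbing any platform can supply; the `transform` function is the part each operation has to write for itself.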

    Per the tools question – I am using and teaching a platform called itduzzit to build these integrations in the cloud.

  • Ken McNamara

    There are two separate issues here – programming and structure.

    Good programming is about organizing structure around data and code.

    Strip away all the magic and programming is about finding two pieces of data, comparing them with the correct piece of code and making a decision.

    Certainly, logic and order are vitally important to a programmer – but I’d maintain that organization of code and data into a structure that enables a programmer to locate code or data is equally if not more important. To be more precise ‘…in a structure that enables a programmer to locate the EXACT code or data…’.

    I can’t comment on ‘most programmers’ but in my own experience my beginning efforts at coding leaned heavily on the power of the system to make up for my poor structure. Of course, in the long run that spaghetti code (house of cards) was abandoned for a more organized structure – which could be extended and maintained.

    I guess most accidental programmers either learn this the hard way or abandon their code.

    But while I came to understand and work in my own structure – I never really understood the supporting structure (OS and network above me). At least not to any substantial level of expertise.

    I’d suggest thinking about structure with home construction as a model (I’m sure this is not a new idea).

    Maybe there are a few people out there who understand all the systems that go into a modern home – and know how they fit together and interact. And not just active systems – but passive systems such as roofs, foundations, walls and windows.

    But there aren’t many.

    I’m pretty sure that this is the future of programming – programmers who learn coding and structure – and work within structures they depend on – but don’t fully understand.

    I’m equally sure that structure is the missing component in many programmers’ toolkits.

    I’d suggest that structure and teaching structure is an area that needs attention.

    • Nice post…

      I can’t agree more that structure in general needs attention from the pedagogical angle and beyond. Back in the tail end of the ’90s (’96 / ’97), during the first couple of years of my CS undergrad, I bemoaned to professors that students would benefit greatly from a class early in the first year focused on GoF design patterns and their relation to structure and building a vocabulary. As things go, one was left to self-educate beyond the bare minimum on these matters for the most part.

      In fact, structure is central to my framework / platform efforts at many levels, from core architecture to modularization to tooling and the extensive tutorials I mean to provide teaching the concepts at hand. Since I’m going to ask developers to make a switch in the way they think about and structure their code, the bar is rather high for an initial release, as sweating the details ahead of time is essential. In a sense I’ll be selling structure, with the aim of getting results quicker and minimizing technical debt as requirements change or an app evolves.

      Your example of a house and the variability of what potentially defines one is a good example of where traditional OOP structure (inheritance / getters / setters / mixing data & behavior) falls apart. It is core to the entity system dilemma.

      An emphasis on the most efficient structure is the basis for low- and high-level data oriented design. You can swap “Good programming” with “Data oriented design” in one of your leading quotes: “Good programming is about organizing structure around data and code.”

      I quibble slightly with your follow-up paragraph, insofar as high-level data oriented design is geared toward helping the developer find the exact data / logic, with the other half geared toward efficiency; low-level DOD, however, is geared toward absolute efficiency for the low-level architecture at hand, and this may lead to more obtuse structuring of data and code / logic.
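      A minimal illustration of the high-level separation described above, reusing the house example (all names and numbers are hypothetical): plain data records kept apart from the functions that process them, rather than classes mixing state and behavior.

```python
# Data oriented sketch: data is plain structure, behavior is free functions.
# Contrast with classic OOP, where a House object would mix fields and methods.
# All names and figures here are illustrative.

from dataclasses import dataclass

@dataclass
class House:            # pure data: no behavior attached
    wall_area: float    # square metres
    window_area: float  # square metres

def paintable_area(h):
    """Logic lives outside the data it operates on."""
    return h.wall_area - h.window_area

def total_paintable(houses):
    # Batch processing of many uniform records is where DOD's
    # efficiency focus tends to pay off.
    return sum(paintable_area(h) for h in houses)

houses = [House(100.0, 12.0), House(80.0, 8.0)]
print(total_paintable(houses))  # 160.0
```

      Keeping `House` as inert data means new behaviors can be added without touching the records themselves.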

  • Duncan Cragg

    I was excited to read this analysis because of its close correlation to the drivers behind my own project, Cyrus. Cyrus is a programming language and environment.

    Distributed: Concurrency and distribution are largely transparent in Cyrus. It has self-driven data structures called ‘items’ which have their own thread and are responsible for their own state. They are published on a URL.

    Device: Cyrus is primarily targeted at Android for the client side. Cyrus sees programming as ‘reality extension’ – blurring the boundary between real and virtual. You can generate 3D on Android pretty easily; it’s on the way to augmented reality.

    Data: Cyrus is declarative – which means it’s data oriented by nature. I call its programming model Functional Observer: an item sets its state as a function of other observed items’ states.

    Democratised: a primary driver of Cyrus is that it should empower non-programmers to create their own virtual stuff and animate it with simple rewrite rules. I’m trying it out on my 11-year-old kid!

    Dangerous: Cyrus has such a simplified view of the world that it can even be collapsed down into an operating system! An earlier prototype of Cyrus was actually implemented as a hack into the Linux kernel. By ‘simplified’ I mean that Cyrus itself doesn’t explicitly talk about low level stuff like networking, persistence, GUIs, threads, etc.
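    A toy sketch of the Functional Observer model as described above (this is illustrative Python, not Cyrus itself; all names are invented): each item’s state is recomputed as a function of the states of the items it observes.

```python
# Functional Observer sketch: an item sets its state as a function of
# other observed items' states. Illustrative only -- not Cyrus code.

class Item:
    def __init__(self, state=None, func=None, observes=()):
        self.state = state
        self.func = func              # state = func(observed states), if set
        self.observes = list(observes)
        self.observers = []
        for dep in self.observes:     # register as an observer of each dependency
            dep.observers.append(self)

    def set(self, state):
        self.state = state
        for obs in self.observers:    # propagate the change downstream
            obs.update()

    def update(self):
        if self.func:
            self.set(self.func(*(dep.state for dep in self.observes)))

price = Item(state=10)
qty = Item(state=3)
total = Item(func=lambda p, q: p * q, observes=(price, qty))
total.update()
print(total.state)  # 30
price.set(20)       # observed item changes; total recomputes itself
print(total.state)  # 60
```

    The key property is that `total` never mutates itself imperatively; it only re-derives its state when something it observes changes.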

    My blog has more on Cyrus:

    • Good luck with Cyrus; hopefully I’ll have a bit of time to take a deeper look. I am curious, though, about your statement that Cyrus is declarative and thereby “data oriented by nature”. Your description of “Functional Observer” seems to have more to do with data-driven design than data oriented design. The latter, from a high-level perspective, is about explicitly separating data from logic / behavior; from a low-level perspective, it is about orienting data around the most efficient processing for the architecture / task at hand.

      • What I meant was that declarative approaches treat data structures as first-class, core elements – the processing terms depend on a specific data structure: Prolog’s assertions, Lisp’s lists, SQL’s tables, XQuery’s XML, Make’s files, spreadsheet formulae’s cells, CSS’s DOM. Often the processing terms are expressed, or expressible, in the same data structure on which they operate.

  • Armin Bachmann

    The programming paradigm needs to change. I would like to invite you to try out GeneXus.

  • Fantastic article, thanks so much for writing this.

    We are working on the issues that arise for developers who now have to manage multiplexed asynchronous and synchronous messages. In this paper (we’ve got a few more coming out soon on this topic), we apply the concept of PolySocial Reality (PoSR), the way that people are using devices and communication, to explain what programming will need to do to accommodate what is coming with the highly heterogeneous nature of multiple devices and multiple users, all multiplexing within multiple environments:

    PolySocial Reality: Prospects for Extending User Capabilities Beyond Mixed, Dual and Blended Reality

    More papers on this topic at:

  • Well-written article; it points out crucial parts of next-generation programming requirements. I think within the next 50 years, millions of access terminals will appear and be as common and active as tea stalls in South Asia. Those terminals will be touch-screen and voice-command enabled. In that case we need to think about newer programming models and platforms.

  • Georgios Gkekas

    I like the last point most. It has become clearer in the last decade that abstractions, although good for producing understandable and more maintainable systems, impose a big performance penalty on the produced code. Worse still, once software engineers realized the problem, they tried to mitigate it by creating new technologies, trends or patterns (like functional programming and dynamic languages), only to discover that they were chasing their own tails, because these technologies had actually existed for many decades before the rise of OOP. Therefore, I also strongly agree that we need a shift in the way we think about programming and system architectures.