
The Army, the Web, and the Case for Intentional Emergence

Lt. Gen. Sorenson, Army CIO, at Web 2.0 Summit

I didn’t make it to the Web 2.0 Summit in San Francisco last November, so I didn’t get to see Army CIO Gen. Sorenson present this Higher Order Bit talk in person. However, I thought it was cool that the Army made the agenda, and luckily someone posted the video. I finally got a chance to go through it. If you didn’t see the talk, or don’t have the 20-ish minutes to watch it now, here’s a rough summary:

- Because of security and related concerns, it takes a very long time for the Army to take advantage of new generations of technology. We tend to deploy it widely about the time it’s becoming obsolete.

- However, we are now beginning to take some advantage of Web 2.0 technologies in, for example, Stryker Brigade collaboration, battle command information sharing, and command and control.

I don’t think that slow technology adoption is caused by fundamental first principles, so I don’t think it has to remain true. But that’s a long discussion for another time. In this post I’d like to focus on Army Battle Command, Web 2.0, and Gen. Sorenson’s connecting of the two. Specifically, I’d like to talk about lost opportunity, and how the same technologies can constitute a generative platform in one setting and window dressing on a temple to determinism in another.

The lost opportunity I’m thinking of isn’t whether Army Battle Command is Web 2.0 enough or not. It’s that enterprises tend to see web technologies as an add-on to whatever they already have. Plus, they tend to focus on specific technologies rather than the combination of technology, process, and policy that makes a collection of technologies viable as a generative platform. “Let’s add some Web 2.0 to this system; we’ll use REST instead of SOAP.” But the fundamental question the web answers isn’t whether REST is better than SOAP; it’s whether emergence is more likely to create innovation than enterprise planning, and the answer to that question is yes.

General Sorenson says in the video that “CPOF brings in Web 2.0 capability, chat, video, etc…” and then comments on “graphics, chat, use of tools…” and stuff like that to reinforce the idea that Command Post of the Future (CPOF) and the Battle Command suite it is part of have Web 2.0 attributes. Like many enterprise technologists, General Sorenson appears to be focusing on rich user experience and collaboration as the attributes that give CPOF a Web 2.0 imprimatur. While that’s not unexpected, I think it leaves most of the benefits on the table and untapped.

[Image: truncated-tail.png]

Putting aside for the moment that CPOF isn’t primarily delivered through a browser, a first step toward webness, the reality is that CPOF and other systems like it neither leverage accessible platforms nor contribute to them. It is a standalone (though distributed) computing system with gee-whiz collaboration and VoIP. And while it offers some enterprise-style data services, it has none of the features of a generative platform. If I’m in the field, I can’t readily extend it or build on it to solve different problems, modify its proprietary underpinnings to suit my local needs, or quickly incorporate its information into other applications. If an important aspect of Web 2.0 is enabling the long tail, then this isn’t Web 2.0.

I should say, this isn’t a post about Web 2.0 semantics. However, it’s important to understand that the web’s power derives from its evolution as a platform. Otherwise it’s hard to see what is being missed by the military’s IT enterprise (and many other large enterprises).

From the beginning the web has been generative. It wasn’t CompuServe. With some basic skills you could add to it, change it, extend it, etc. Jonathan Zittrain, in his excellent book The Future of the Internet – and How to Stop It, reflects on why the Internet has experienced such explosive innovation. He argues that it’s the powerful combination of user-programmable personal computers, ubiquitous networking with the IP protocol, and open platforms. Today, the emergence of open source infrastructure, ubiquitous and cheap hosting for LAMP-based sites, open APIs, and the intentional harnessing of crowd wisdom has ushered in the Web 2.0 era. It’s an era of high-velocity, low-cost idea-trying that leverages the web itself as the platform for building world-changing ideas and businesses.

The Internet hosts innovation like it does because it is an unconstrained complex system where complex patterns can grow out of easy to assemble simple things. Simple things are not only permitted, but they are encouraged, facilitated, and often can be funded with a credit card.

I’ve subscribed to the notion of Gall’s Law for longer than I knew it was a law:

“A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work. You have to start over, beginning with a working simple system.”

The upshot of Gall’s Law isn’t directly stated, but it’s important. To get complex things, you have to be able to do simple things. It turns out that enabling simple things is what the web as a platform does exceptionally well and that enterprise systems like Army Battle Command don’t do well at all. In fact, the DoD enterprise tends to prohibit simple things through an unintended combination of policy, practice, acquisition rules, and organizational inertia.

In an environment that does permit or encourage simple things, many of them will go nowhere, but without those failures, complex successful things are impossible. So, if you want your enterprise to be innovative like the web, you have to resist the Soviet urge to five-year-plan everything, and instead make at least some room to facilitate emergence. If you agree, then the goal of your planning should be to plan the development of infrastructure, policy, and practices that support the emergence of things you haven’t thought of yet.

A while back I met Carlos Castillo and Paul Lin through this blog. I had written a post touching on themes similar to this one, and they commented about their experiences developing systems while in the Army. Later we got together and they told me the story of a system that they built, against all odds, while they were deployed with the California National Guard to Iraq. Called Combat Operations Interactive Network (COIN), it was a simple but very effective set of web applications that streamlined operational administrivia for the soldiers in their unit. Things like crew and equipment manifests for patrols, the status of routes, etc. Stuff that until then was being done in Excel spreadsheets and lots of running back and forth with USB thumb drives.

It was a great story of how, with sheer determination, two guys in the field took on layers of bureaucracy and after six months got a single Linux box authorized on the secure network. Then, using skills they brought with them from their civilian careers, they launched COIN within days.

The really interesting thing from my perspective was that once that box was on the network, its value as a generative node was obvious and they rapidly extended COIN to scratch all kinds of other itches. People were lining up to ask for enhancements and new apps. Simple as it was, it had become a platform, and it was valuable in ways that the CompuServe-like Battle Command couldn’t be. Within a short time elements of COIN were being used in General Petraeus’ daily situation briefings and it had become a fixture in command centers across the theater.

If one Linux box can enable that much emergent innovation, imagine what JavaScripting (or Ruby-wielding) sergeants could do with EC2, some simple application and mapping frameworks, and a lightweight REST-based API strategy implemented across the rest of the Army’s Battle Command suite.
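To make the REST point concrete, here is a minimal sketch of the kind of read-only resource a field-built app might expose. All of the names and data here (routes, their statuses) are invented for illustration; this reflects no real Army system or API. The generative point is that any other application could then pull route status with a single HTTP GET instead of re-keying a spreadsheet:

```python
import json

# Hypothetical in-memory data store; in practice this would sit behind
# whatever the app already uses (a database, a spreadsheet import, etc.).
ROUTE_STATUS = {
    "route-irish": {"status": "amber", "updated": "2009-03-01T06:00Z"},
    "route-tampa": {"status": "green", "updated": "2009-03-01T05:40Z"},
}

def get_resource(path):
    """Map a REST-style path like '/routes/route-irish' to a JSON response.

    Returns an (http_status, json_body) pair. Served behind any plain HTTP
    server, this is the whole contract a consuming app needs to know.
    """
    parts = [p for p in path.split("/") if p]
    if parts == ["routes"]:
        # Collection resource: the list of known route identifiers.
        return 200, json.dumps(sorted(ROUTE_STATUS))
    if len(parts) == 2 and parts[0] == "routes":
        item = ROUTE_STATUS.get(parts[1])
        if item is not None:
            # Individual resource: one route's current status.
            return 200, json.dumps(item)
    return 404, json.dumps({"error": "no such resource"})
```

Resources like these are what turn a standalone app into a small platform: other tools can consume the URLs without any coordination with the original authors, which is exactly the generativity the post argues CPOF lacks.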

I went to the Pentagon with Carlos and Paul and had the opportunity to introduce them to General Sorenson through members of his staff. It was great to see them get recognition for what they had achieved. Most people in their shoes would have faced that mighty bureaucracy and fled the field. However, today COIN is languishing because they’ve rotated home. Once again they are struggling with the bureaucracy to get back over there and further extend and maintain it.

I was thinking about all this as I watched General Sorenson’s video. I’d love to see the Army push less for the development of specific end solutions like CPOF and work instead to build general infrastructure to support emergent innovation. Traditional requirements documents are low-pass filters at best. They are too displaced in time and distance from users, but emergent processes close to the problem can achieve high-fidelity solutions quickly. To take advantage of this, Army Battle Command should expand its mission to build platform components and content APIs, and consider itself one part enterprise system, one part platform, and one part edge-development facilitator.

The overarching goal would be to bring in, train, and develop more Carloses and Pauls in forward-deployed units and then make sure they have what they need to innovate rapidly and in situ. I would expect to see a flurry of simple but valuable things emerge. Some of them might just grow up to be the next really important big complex thing. Plus, just think how cool it would be to have an enterprise metric called “number of serendipitous emergent solutions per brigade.”

  • http://joshuahoover.com/ Joshua Hoover

    Good post Jim. I especially like the story about Carlos and Paul creating COIN.

    I love this Linus Torvalds quote related to the topic (Gall’s Law, really, just told in a way that Linus is known for):

    “Nobody should start to undertake a large project. You start with a small _trivial_ project, and you should never expect it to get large. If you do, you’ll just over-design and generally think it is more important than it likely is at that stage. Or worse, you might be scared away by the sheer size of the work you envision. So start small, and think about the details. Don’t think about some big picture and fancy design. If it doesn’t solve some fairly immediate need, it’s almost certainly over-designed. And don’t expect people to jump in and help you. That’s not how these things work. You need to get something half-way _useful_ first, and then others will say “hey, that _almost_ works for me”, and they’ll get involved in the project.”

  • http://basiscraft.com Thomas Lord

    Some observations:

    The concept of “intentional emergence” applies beyond the web. It is a general feature of “open source” economics. John Robb writes about this all the time.

    In a “bazaar”-style economy with open source flow of information (product designs, techniques, trade opportunities) the market naturally does just what you say: fosters lots of entrepreneurial investment (economic experiments) and then (thanks to the “open source” of information and decentralized control of a bazaar) quickly doubles down on investments that work well.

    For example, that’s (one way to explain) why there were so many newsstands, all with roughly the same business model at one time. It explains how cafes became popular. It explains how “Make” magazine and a few others managed to sell a lot of steel pipes and Kee Klamps(tm) a while back. The bazaar is a natural hotbed for entrepreneurs and winning ideas spread quickly.

    If open source economics lead to “emergence”, how do we get to “intentional”? Well, John Robb again:

    Our enemies provide a fine example of creating intentional emergence: Their strategic goals include helping to cause the failures of states and, where states are failing, filling an economic void that hits the people. The “black market” (from our perspective) becomes the primary market, out-competing the state to be the primary, trusted provider of essential services and the market-maker for essential trade. Loyalties shift towards the new power structure. Within that context, substantial amounts of production are redirected towards weakening state power in neighboring regions and expanding.

    Perhaps the most famous example of this is the IED trade in Iraq: systems of small “garage factories” building and assembling the components of IEDs, with the “know-how” spreading rapidly in open source fashion.

    For the leadership of the enemy this is a convenient and intended outcome. States develop military capability very indirectly: They host a largely centralized infrastructure and provide government services. They approve and collect taxes. They form long term plans, adopt weapons specifications, collect bids, and administer programs. The enemy, instead, spreads the general idea of collapsing the state and defending the new order – then aggressively pays entrepreneurs whose innovation turns out to be useful. The enemy has less infrastructure to defend; they don’t need taxes, they collect the rent from trade monopolies and in the form of the entrepreneurial speculative investment of others.

    An essential characteristic of the “open source insurgency” is that its intentional emergence of innovation is fast and adaptive. Another essential characteristic is that it is hard (at best) to identify any “center” to attack to remove that adaptive, deadly capability.

    An essential characteristic of the (developed) nation states of the world, currently, is that they rely on infrastructures that appear to be highly vulnerable to “cascade failures” brought on by (what Robb calls) “Systempunkt attacks”. A concentrated series of “small insults” to infrastructure can produce non-linear results: failures much larger in scope than any one attack. (Familiar examples from “attacks” by nature would include the Great San Francisco Fire after the quake or the US’s experience with massive-scale black-outs. Enough small pieces of the fire-fighting infrastructure go out and suddenly a handful of fires can grow to take down the whole city; particular points on the power grid go out and take much more of the grid with them.)

    The question arises, and is raised by Jim’s post: Are the tactics of open source, bazaar-driven insurgency superior to or inferior to the more top-down, centralized tactics of, say, the US forces?

    It’s not a black and white question. The US forces display some elements of de facto open source tactics themselves. Famous examples include the rapid spread in Iraq of techniques for improvised armoring of humvees and the use of “silly string” to search for trip-wires. I have the impression from various news reports that those members of the forces most directly involved in citizen contact in Iraq (e.g., in relation to employment or information gathering) also use bazaar-like methods to experiment and spread ideas about what works and what doesn’t.

    Still, the US armed forces (and developed state forces generally) are mostly “top down”, Cathedrals to the enemy’s Bazaar. So, which is better? Or is one better? In other words, do we as a society need to adapt to using open source / bazaar tactics ourselves, in our defense, or have we a superior methodology in our cathedral tactics? And we can acknowledge up front that the right answer (as we know from improvised armor on Humvees and silly string) — the right answer is probably “a little from column A and a little from column B”.

    That’s the question Jim raises, and it’s a good question, and, afaict, it’s a question the forces have been wrestling with earnestly for the past several years.

    Jim, where you lose me is with your certainty and faith in certain specific technologies – and your citing COIN as an example (at least as you tell the story of COIN):

    COIN sounds to me like a problem. Let’s start with that. Per your account, with much work, this single Linux box was admitted into the theater for this skunkworks project. It began as a locally used labor-saving device. It acquired uses left and right, quickly. It wound up becoming critical infrastructure.

    The loss of that Linux box, or the gentlemen who ran it, would thus amount to the loss of critical and difficult-to-replace infrastructure. And yet, two guys and a box are highly vulnerable to loss. You know all the old jokes about “What if Linus is hit by a bus?” The analogous question is not so much of a joke in this context.

    So, COIN gives us no good evidence at all that a LAMP stack and browser hacks are good “general purpose kit” to take into the field. In this case, at least per your description, the outcome was the introduction of a significant new vulnerability!

    Notice the important difference here between our enemy and us: In our enemy’s Bazaar, the low-tech, wide distribution of entrepreneurial effort ensures that there is no “center” to attack – no single point of failure. With COIN, the outcome was the opposite: a single point of failure – and a vulnerable one, at that – was created.

    So it is too in the civilian economy around Web 2.0. Consider, for example, how Google has become a vulnerable, single point of failure as regards user privacy. Or consider the vulnerability of Facebook to the possibility of ceasing operations compared to the sum of investments being made in content there by the users.

    We have in Web 2.0, as it stands, “emergence” all right – we sure do. But emergence of what? Generally: the simultaneous emergence of new features and new single points of failure – new, centralized weak points in the essential infrastructure of the state-based society.

    The Lt. General in the talk showed the “S-curves” of technology creation and the lagging S-curves of deployment in the field. I think, Jim, if you look deeper into that you’ll see it’s because the forces are very, very wary of creating unintended surprises such as the discovery (the hard way) of previously unrecognized single points of failure.

    “Computing” and “communications” are deeply intertwined. Modern armies certainly benefit from ever-more sophisticated and, indeed, improvised means of managing those things. I think that’s beyond dispute – it’s common sense.

    You can’t get from there to “Use the web 2.0 products more!” for free. There’s no good evidence for it. Use EC2? No, really, that’s your suggestion? Hello Dresden.

    Industry can certainly help the forces find the right balance of Bazaar-like, improvisational capability with entrepreneurial opportunities but, to do so, it should be developing far less centralized technology – far more robust at every layer of the stack. I don’t just mean “kill all the bugs”: I mean tools that are simple and reliable to the core of their design. Things that “just work” and that don’t require an elaborate, centrally managed infrastructure. Things made up of fungible components that can be reliably supplied and maintained in the field. That tools such as this would also make the public sector of society more robust is all the more reason to go in that direction.

    And I can’t resist pointing out that the “Flower” system I sometimes try to interest people in – a P2P, decentralized system for building decentralized web services – is at least schematically the *kind* of thing needed here.

    -t

  • http://blog.gardeviance.org Simon Wardley

    I absolutely love this one particular line of Thomas’ – “Things made up of fungible components”

    The process of commoditisation (the shift from innovation to custom to products to services) is all about the increase in the ubiquity and certainty of an activity. It’s all about the shift from the novel and new to standard components, and in this case the provision of standard internet services as components.

    Whilst commoditisation allows for further innovation (as per the process of creative destruction), it goes much further.

    Componentisation (caused through commoditisation) is an accelerator of evolution as the rate of evolution of any system is directly related to the organisation of its subsystems (Herbert Simon’s proof in the Theory of Hierarchy, 1960-ish).

    So as we shift towards ever more “fungible components” of the computing stack provided as internet services (whether data services, applications, platforms or infrastructure), the speed of innovation and the agility of businesses will continue to accelerate.

    Things are just going to get a whole lot faster, but then we’ve been quietly accelerating for a very long time. These processes have been occurring to all manner of activities.

    Gall’s Law has its root in cybernetic management, and it is in essence the application of Herbert Simon’s theory. For any complex system to evolve and adapt on any reasonable time scale, it has to be built from lower-order and simpler subsystems.

    However, this is not new. We’ve been building platforms comprising lower-order subsystems to enable further innovation and agility for hundreds of years, in many different fields. In architecture, for example.

    Whilst such ideas are relatively new (the last thirty years or so) to parts of the IT industry, the process of building a platform itself never stops.

    The reason is that the idea of a platform for emergent innovation touches upon Salaman & Storey’s innovation paradox – the need for order & co-ordination to survive today (efficiency, adaptability, etc.) and the need for disorder in order to survive tomorrow (innovation).

    Balancing this paradox is one of the greatest challenges of management. This paradox is the reason why Google’s 20% rule has been so effective. It’s the reason why there are no single management or project methodologies which work everywhere. It’s the reason why attempts to apply best practices throughout an entire organisation tend to kill it off.

    The management of the paradox requires balance. It requires a bit of the “bazaar” and a bit of the “cathedral” and to make it worse it doesn’t stand still.

    Any organisation is a mass of activities (and actors), and those activities are in constant movement from innovation to commodity. New activities are constantly appearing, and we can’t stand still because any business exists in an ecosystem which is itself evolving.

    As per the Red Queen, we have to keep running just in order to stand still. Commoditisation makes sure of that, componentisation just accelerates our necessary velocity.

    Hence we are forced to constantly update any platform and to re-balance this paradox, it’s never ending.

    So Thomas is right to point out that it’s neither this (the bazaar) nor that (the cathedral), but both.

    Unfortunately the “right answer” and the right balance is constantly changing.

    Great post. Excellent comment Thomas.

  • http://kevincurryblogspot.com Kevin Curry

    Jim,

    You have so squarely nailed it that I don’t know where to begin.

    I do think we can spare ourselves further wrestling with the metaphysics at work here: culture, politics, philosophy.

    Let’s talk about some practical things that can be done to emulate the heroes of your story:

    Drive implementation to publish data in formats that are simultaneously human and machine readable: XHTML, XML, JSON, CSV.

    Great if it’s a mandate. I think it’s time for a mandate, but this can happen with or without a policy from on high, by writing it into program requirements and solicitations for said programs. Or just decide to “Fix It!” Both top-down and bottom-up solutions tend to be complementary because they are based on the same strategy: better data flow vice “killer apps,” “small pieces loosely joined,” and ubiquitous technology standards.

    Here is just a small collection of taxonomies that Army (and DoD…and federal government) depend on. When I say “depend on” I mean spend a great deal of money managing on multiple levels across every domain:

    http://delicious.com/prestidigital/DoD+taxonomy

    These things are at the heart of many processes that cut across multiple domains; operational, technical, and programmatic. These common structures are easily found at an authoritative source and viewed by people (in public) as (closed) PDFs and (horribly unnavigable, malformed) HTML pages. Immediately I think, “these have to be in a database somewhere.” And in fact, they are in databases EVERYWHERE. I bear witness, as do my colleagues. Anyone and everyone who wants or needs to use one of these taxonomies is stuffing/replicating the data into Excel spreadsheets, Enterprise Architecture tools, Wikis, “Capabilities Portfolio Management” tools, and hundreds of other tight-couplings with (domain-specific, stove-piped, heavyweight) applications in order to support people who are performing similar, routine operational and programmatic functions. The same goes for lexicons. The “entire” FCS lexicon was (is?) stored in an Excel spreadsheet on the Lead System Integrator’s “Advanced Collaborative Environment” (ACE). I’m not going to get into how many things are wrong with that here. I know it’s wrong and I know how to make it right. (And I’m certainly not the only one. We’ve been working bottom up for years.)

    This anti-pattern of poor data portability and data redundancy is costly and causes functional gaps at many levels, across every domain. Yet the situation is simply and easily corrected, for example in the cases of these taxonomies and lexicons, by ALSO publishing them as well-formed Unicode markup, i.e., XML.
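    As a concrete sketch of the “also publish it as well-formed markup” point: the snippet below turns flat, spreadsheet-style taxonomy rows into XML that downstream tools can parse directly instead of re-keying. The sample terms and the `<taxonomy>`/`<term>` element names are invented for illustration; any real DoD taxonomy and schema would substitute.

```python
import xml.etree.ElementTree as ET

# Hypothetical flat taxonomy, as it might be extracted from a spreadsheet:
# each row is a (code, label) pair. The terms are invented for illustration.
rows = [
    ("C2", "Command and Control"),
    ("C2.1", "Battle Command"),
    ("C2.2", "Communications"),
]

def taxonomy_to_xml(rows):
    """Render (code, label) rows as a well-formed XML document string."""
    root = ET.Element("taxonomy")
    for code, label in rows:
        # One <term code="..."> element per row, with the label as text.
        term = ET.SubElement(root, "term", attrib={"code": code})
        term.text = label
    return ET.tostring(root, encoding="unicode")
```

Once a taxonomy is published this way from a single authoritative source, the dozens of Excel copies described above become unnecessary: every consuming tool can fetch and parse the same document.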

    Do this and I am certain intentional emergence and successful complexity will happen. Watson and Crick showed us that, right?

    Great post. Thanks.
    - Kevin Curry

  • http://kevincurryblogspot.com Kevin Curry

    Thomas,

    I wish I had read your comment sooner. I have to say that your insight is also beyond good. I am often reminded of several points you so eloquently made. Information security is one matter; safety is another. “Drive-by fieldings” are to be strictly avoided. It is one thing to conduct a feasibility assessment and develop an operational prototype. It is quite another to build and deploy a robust, reliable, fail-safe (or at least low-fail) capability.

    I can report from the bowels of the Cathedral that it is a stinking mess in here.

    I also think there are several principles of Web 2.0 that can succeed in both the Cathedral and the Bazaar. Not everything the military does is a matter of life-and-death. Sometimes – most of the time – it is just process. The trade-off for flexibility is complexity, but indirect does not have to be inefficient. Complication caused by lack of data portability, for example, is an arbitrary and unnecessary burden. It can also be alleviated without getting complicated.

    I, for one, do sincerely appreciate and will use your comments as guided wisdom while seeking the balance.

    Cheers,
    Kevin Curry

  • Jim Stogdill

    Thomas, I appreciate your thoughtful comments. You always stretch my thinking with your responses. However, I strongly disagree with one bit. COIN is not a problem. COIN is a solution to a bunch of problems, and there is no danger in that.

    There is risk in the fielding of almost any technology, but the military often uses the “there are lives at stake!” argument, and it slows the fielding of stuff that could be helping soldiers. The question is what value against what risk (and does the risk go up unacceptably with dependency?). And aren’t lives at stake because stuff isn’t available, too?

    Some of this is a still-lingering reaction to things like the Bradley acquisition debacle many years ago. And operational testing is as critical now as it was then; but not everything is a fighting vehicle that will put a crew at risk if its armor fails.

    If COIN fails as the result of “drive by” fielding, then those platoon sergeants will start running thumb drives back and forth again and patrol leaders will go back to picking up the phone to find out if MEDEVAC is available before they leave on patrol.

    When I was a submarine officer we still practiced “mental gymnastics” so that we could estimate a torpedo firing angle adequate for a quick shot in our heads in the event that our fire control computers failed. However, no one was arguing that we shouldn’t have a fire control computer because it could (and did) sometimes fail. It’s the same reason pilots wear knee boards even though the avionics are all digital. Know where you are dependent on your technology and have a backup plan, or be ready to invent one.

    Once you accept the argument that “we can’t be surprised because lives are at stake,” then you start thinking “we better get all the requirements in here we are going to need before we spend all that money and time testing it.” From there it’s a hop, skip, and a jump to a 1,000-page requirements document and your successor’s successor as program manager getting to deliver it.

    Or put another way, I think it’s better to start somewhere and let simple things grow into complex things. Can you imagine if Bezos had refused to launch Amazon till he was sure he could deliver the 2009 version of the site and was positive he could get it through a holiday season? It would be a fascinating experiment to take the site and its infrastructure as it stands today and try to compile a single all encompassing requirements document for it, just to see how many pages it would require.

    If the pendulum were equidistant between the Fortress and the Bazaar I wouldn’t make the argument I’m making with such abandon, but where IT is involved in the military, it’s still almost all Fortress and no Bazaar. I’ll worry about people taking too much risk building on generative infrastructure in theater after that problem has emerged. Then we’ll have to find the balance between fortress and bazaar that you are right to point out is required.

    Plus, if the Army could provide infrastructure and facilitation for things like COIN, they would be on the reservation and managed rather than drive-by one-offs (I’m sort of ignoring the EC2 vs. Flower argument for the moment because anything is better than a stranded box in a closet).

    There needs to be a long tail, and at the long end of the tail lots of value can be derived, but no one will die if it breaks. It wasn’t there before anyway, and there is always a piece of plexi and a grease pencil around somewhere if it’s needed.

    Regarding those S-curves, I agree, they are the way they are because people are trying to avoid surprises. But if you look back through our recent military history you’ll find that during periods of technological discontinuity the rate of experimentation was very high. Jet aircraft in the ’60s (the 100 series practically blurred past, they turned over so quickly), armor in the period before and during WWII, air defense systems in the ’50s, etc. If “NetCentricity” is the warfighting discontinuity it is claimed to be, then we should be in a period of rapid experimentation and development. Better to surprise ourselves with occasional failed experiments than be surprised by competitors.

    One more thought I meant to work into the post… there have been some successes over the years with grass-roots generative development, innovations shorting to ground when enough need built up. If you are interested, this paper is a good read. It tells how Air Force programmer/fighter pilots built a mission planning system using early PCs (i.e., generative platforms) that had been distributed to their fighter squadrons. The descendant of that emergent product is still in use and has outlived all the formal programs that were intended to replace it. Now it looks as though it is going to be open sourced. http://www.mit.edu/~lindsayj/Projects/WarUponTheMap%20v30.pdf

    In the military, like anywhere else, given enough pain, innovation will short to ground. I’m just suggesting we facilitate by installing an anode.

    @Simon – great comment about Google’s 20%. The only thing I’d like to add is that along with being a source of emergence, the 20% gives participants a sense of empowerment, a sense that people can contribute directly. I’d like to enable that same sense in the military by enabling a lot more Carloses and Pauls, and then on up the chain to small contractors, companies outside the complex, etc.

    @Kevin – I agree with you wholeheartedly about the data and data standards, but I see that as necessary but insufficient. The generative ecosystem is more than just data; in fact, it’s more than just technology.

  • http://basiscraft.com Thomas Lord

    Jim,

    I think we made a nice “set piece” here and I’m planning to (might change my mind) stand pat in this thread. I don’t see any obvious way to improve on the serendipitously nice back and forth between you, me, and the other commenters. I’ll be re-reading your reply-to-my-reply there several more times — it’s very helpful. Look forward to talking more in future threads. Thanks. We made something interesting here. The performative context is what leads me to stop, just here, just at this point.

    -t

  • redserpentl

    Good article, Tim.
    I was in the Air Force in the late 70’s; our computer had core memory, and we actually had to debug cobwebs out of it.
    The DoD has traditionally relied on defense contractors like Hughes or Martin Marietta to develop its highly niche systems; ours was a simulator for a Boeing B-52. DoD still dances with the traditional lobbyists of large defense contractors like General Dynamics, relying on their technological developments even now. I believe that’s partly because certification of a product by the DoD is so arduous and expensive, and only recently, because of 9/11, is there wider acceptance of DoD needs in creative circles.
    But you are right, Tim, on every point. DoD needs to open up its bidding process to include more creative companies. On another note, some of these Silicon Valley corps are owned by foreign interests and are the proprietors of the patents.

  • http://kevincurryblogspot.com Kevin Curry

    LtG Sorenson referenced the “Google 20%” model explicitly when he kicked off the Army CIO G6 Innovation Lab (http://innovation.cio-g6.com) last week.

  • http://www.maya.com Chris Horn

    A quick reply – there are many who can describe this in much greater detail, but if you really look at CPOF you should see a system that:

    a) builds upon a simple concept storage repository that is the successor technology to relational databases (U-forms)
    b) allows users to generate powerful new data visualizations very quickly with much less technical skill than required by Web 2.0 technology
    c) allows users to effortlessly share those visualizations and re-use them between users and with new data
    d) was developed iteratively in a very short period of time in a process that was driven by both programmers & designers and military personnel

    Honestly, I really don’t see why you want “me too” Web 2.0 technology when you’re blessed with a truly discontinuous improvement that is unavailable to (yet highly desired by) the civilian community.

  • Stephen Yaffe

    This is a very interesting article. I happen to have gone to Basic Training and AIT with Paul, and he sent me a link to the posting. Right now I am finishing up MIBOLC as a Second Lieutenant. As an Executive Officer in my previous assignment, I worked very hard to develop a database for my company, which I then pushed up to the Battalion. Even a program as commonplace as Access was more than the Army could handle without the greatest extent of hand-holding. The biggest problem I see with technology development and adoption is that the lowest-level user is almost always moving on to another job; the time it takes to learn a new system is ultimately wasted if the user is no longer in a position to apply his education. Personally, I am very interested in this COIN program. It is easy enough to see its value to the PLs and Platoon Sergeants who need a system that easily produces required/redundant paperwork. I look forward to talking with Paul, and when I get to my next assignment I will see if I can help push this ball up the hill. Thanks for the article.

  • Jim Stogdill

    @Chris

    I’m not exactly sure how to respond since any response I make runs the risk of becoming an orthogonal apples and oranges argument.

    I’m not criticizing CPOF per se, I’m just pointing out that it, and the whole Battle Command suite, are “at the head of the tail.” I’m arguing that the Army should intentionally enable the long end. For the 100 or so possible users of a CPOF instance it might be quite valuable and, as you argue, quite flexible, but it’s not a generative platform. Even if, as you suggest, it could be valuable to the civilian world, it isn’t available in any readily accessible way because Commotion/Mayaviz has been locked up by GD. It’s not available as a runtime platform a la Google App Engine, nor is it available as an open source platform. Because of these factors it will probably remain a niche play.

    The technology of COIN isn’t really that interesting, but that’s not the point. The point is that that Linux box did serve as a generative platform and, for almost no money, has often served as many as 5,000 users in a week, helping them solve mundane but irritating and time-consuming problems.

  • http://basiscraft.com Thomas Lord

    Jim:

    Ok, I’ll come back just to clarify that I am completely behind you (and way ahead of you! by over a decade!) in advocating for “generative platforms,” but it’s my very long-term contemplation of, and work on, the topic that leads me not to rush to a particular technology just because it is vaguely illustrative of the concept. Early mistakes are hard or impossible to correct later, and the basis set of “generativity” deserves very careful and carefully organized attention.

    It’s a huge topic and extends way beyond just the Army. The LAMP stack and JavaScript-in-browsers should by now have taught the capitalists the concept of a generative environment (just as the Lisp fathers before us learned the concept), but you actually need to think through the kernel of such a system or else it comes out flaky and with a very limited life cycle (like most of “the web” today).

    (And, yes, the whole creation of the “Web 2.0” meme is a case of Tim “getting it” in the specific, flawed example of a few particular technologies – so some of the capitalists are making progress, at least. Some day I should tell you more about the original vision of the GNU system as a highly improvisationally-extensible (generative) system with a more systematic approach than the current “Web 2.0”….)

    -t

  • http://www.rhizalabs.com Josh Knauer

    Jim- I’d respectfully suggest that you look deeper at CPOF than just the Sorenson presentation. It is in fact a generative platform that presents a palette of tools that enable users to build their own applications, change workflows, and collaborate in a way that no other application I’ve seen does. The proof has been in the pudding… users have been able to run with the platform and produce applications that were never contemplated by the people who built it.

    Sorenson’s description of it did not really do it justice and did not focus on the deeper impacts that it has had. Based on his presentation alone, I don’t blame you for coming to the conclusions that you did. There’s a lot more to the story, however, than one person’s short presentation.

    I believe that the military needs more than just random hacking projects based on shaky APIs and scattered efforts. Web services, for example, may not be the best type of platform to rely on when you’re in an environment with very poor and intermittent bandwidth. A more systematic approach has to be taken, but as you correctly point out, those approaches should allow for users to be able to modify existing and build new applications that are tuned to their needs. The stunning thing about CPOF is that it is built on a completely new type of data architecture (p2p) that does very well in such environments. It is a more systematic approach to opening up the information space within the military that will long outlast random hacking efforts of people working with one or two servers and chewing gum.

    As an aside, this award was just announced: U.S. Army Command Post of the Future (CPOF) takes the “Outstanding Government Program” of the year award at the recent Institute for Defense and Government Advancement (IDGA) 2009 Network Centric Warfare Awards™ (NCW) ceremony.

    -josh

  • Jim Stogdill

    @ Josh

    Thanks for your comment. I’d love to dig deeper and understand CPOF better. If you can connect me with people that can give me a closer look I’d appreciate it.

    Regarding your comment “…just random hacking projects based on shaky APIs…” I really think that kind of thinking is unnecessarily limiting. First, because it hovers over the assumption that anything that isn’t a multi-tens-of-millions-of-dollars project is somehow “just hacking.” Second, because it doesn’t acknowledge the reality that there are all kinds of problems that need solving across a continuum from highly mission critical (e.g. nuclear weapons C2) to costly but not all that mission critical. My whole argument in a nutshell is that enabling random hacking projects is exactly what the DoD needs.

    It’s worth noting (again) that I wasn’t trying to denigrate CPOF, or the Pittsburgh tech community from which it sprang. I can’t help but notice Pittsburghers running to its defense; I thought you guys would be too busy celebrating a Super Bowl victory to worry about someone talking about CPOF. :) In any case, my broader point remains: I believe the DoD should be intentionally enabling emergent behaviors and solutions.

  • http://technosailor.com Aaron Brazell

    Great post, great comments, great discussion. Nothing to add except to say that if you don’t have a clearance of unknown level/compartmentalization, I doubt you’re gonna have a chance to look at the guts of the CPOF API. :)

  • Lori

    I had the opportunity to use CPOF while I was in Iraq last year. It is far more useful and reliable than some give it credit for. I think the Army should take advantage of its younger generation’s knowledge of the capabilities current technology has. There are so many more reliable and useful opportunities (no offense to the older generation) that aren’t being taken advantage of because of a lack of knowledge on the part of those making the decisions.

  • Dean

    Hey everyone, I am an instructor for Army Battle Command Systems (ABCS). Jim, I am going to have to agree with both Chris and Lori: CPOF is being used to enhance and multiply available assets in both Iraq and Afghanistan. I have talked to soldiers who have used it, and one told me he attributed part of the success of the surge in Iraq to CPOF’s ability to dynamically share info and collaborate over hundreds of miles. Though nothing is perfect, I can safely say from my 22 years of military and civilian experience that we are headed in the right direction.

  • Jim Stogdill

    Dean and Lori – thanks for your comments. I agree with you that CPOF has been a valuable addition and that it is definitely directionally on target. I would just love to see continued emphasis on extensible, emergent, and generative platforms with much higher approachability.