"enterprise" entries

DevOps in the enterprise

Follow Nordstrom's journey to continuous delivery and a DevOps culture.

Would you open an email with the subject line “DevOps and pants”? I’m not sure I would.

Six months ago I sent Rob Cummings an email with exactly that subject and he did. And we can be thankful he opened it, because by doing so, he invited us to look back at the fascinating history of Nordstrom’s implementation of continuous delivery and a “DevOps culture.”

The story begins in 2004, in a different era of web operations and performance. Back then, Rob and his team drove out to the colocation facility to deploy the e-commerce site. It was an era in which everything was a bit more heavyweight and things moved a bit slower. But that was OK, because most companies were still figuring the web out, almost as much as users were trying to figure it out.

Then the world started changing. Customer expectations changed. The business’ expectations changed. Heck, even developer expectations changed. As a leader in Nordstrom’s operations department, Rob had to adapt. And all of this was complicated by the fact that the increased pace was starting to strain his team and the systems he was responsible for maintaining. Read more…

Comment

Working like a startup at IBM

How a small and passionate team used modern techniques to shift a business on a short timeline.


Over the past year, I assisted in creating an application that helped shift a major part of IBM to a software-as-a-service (SaaS) model. I did this with the help of a small but excellent development team that was inspired by the culture and practices of web startups. To be clear, it wasn’t easy – changing how we worked led to frequent friction and conflict – but in the end it worked, and we made a difference.

In mid-2013, the IBM Service Management business and engineering leaders decided to make a big bet on moving our software to the cloud. Traditionally we have sold “on premises” software products. These are software products that a customer buys, downloads, and installs on their own equipment, in their own data centers and facilities. Although we love the on-premises business, we realized that cloud delivery of software is also a great option, and as our customers evolved to a hybrid on-premises / cloud future, we needed to be there to help them.

Read more…

Comments: 4

Four short links: 9 January 2014

Artificial Labour, Flexible Circuits, Vanishing Business Sexts, and Thermal Imaging

  1. Artificial Labour and Ubiquitous Interactive Machine Learning (Greg Borenstein) — in which design fiction, actual machine learning, legal discovery, and comics meet. One of the major themes to emerge in the 2H2K project is something we’ve taken to calling “artificial labor”. While we’re skeptical of the claims of artificial intelligence, we do imagine ever-more sophisticated forms of automation transforming the landscape of work and economics. Or, as John puts it, robots are Marxist.
  2. Clear Flexible Circuit on a Contact Lens (Smithsonian) — ends up about 1/60th as thick as a human hair, and is as flexible.
  3. Confide (GigaOm) — Enterprise SnapChat. A Sarbanes-Oxley Litigation Printer. It’s the Internet of Undiscoverable Things. Looking forward to Enterprise Omegle.
  4. FLIR One — thermal imaging in phone form factor, another sensor for your panopticon. (via DIY Drones)
Comment

The Feedback Principle

Gracefully maintain a desired value in the presence of uncertainty and change

In a previous post, we introduced the basic feedback concept. Now it is time to take a closer look at this idea.

Feedback is a method to keep systems on track. In other words, feedback is a way to make sure a system behaves in the desired fashion. If we have some quality-of-service metric in mind, then feedback is a reliable method to ensure that our system will achieve and maintain the desired value of this metric, even in the presence of uncertainty and change.
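
The idea is easiest to see in a small sketch. The Python below is our own illustration, not code from the post: plant() is a purely hypothetical stand-in for whatever system produces the metric, and the loop is a bare-bones error-driven controller that measures the metric, compares it to the desired setpoint, and adjusts the control input by an amount proportional to the error on each cycle.

```python
# A minimal feedback-loop sketch (ours, not from the original post).
# plant() is a hypothetical system whose output we want to keep at a
# target value despite an unknown gain and random disturbances.

import random

def plant(control_input):
    """Hypothetical system: returns the measured quality-of-service
    metric for a given control input, with drift and noise."""
    disturbance = random.uniform(-5, 5)        # uncertainty and change
    return 0.8 * control_input + disturbance   # gain we don't know in advance

def run_feedback_loop(setpoint, gain=0.5, steps=20):
    control_input = 0.0
    for step in range(steps):
        measurement = plant(control_input)     # observe the metric
        error = setpoint - measurement         # how far off target we are
        control_input += gain * error          # correct in proportion to the error
        print(f"step {step:2d}: measured {measurement:7.2f} (target {setpoint})")

if __name__ == "__main__":
    run_feedback_loop(setpoint=100.0)
```

Because each correction is driven by the measured error rather than by a model of the system, the loop settles near the target even though the controller never knows plant()’s gain or its disturbances; that robustness to uncertainty and change is the point of the feedback principle.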

Read more…

Comment: 1

What is an enterprise, anyway?

However one defines "enterprise," what really matters is an organization's culture.

This post was co-authored by Mike Loukides and Bill Higgins.

Bill Higgins of IBM and I have been working on an article about DevOps in the enterprise. DevOps is most closely associated with Internet giants and web startups, but increasingly we are observing companies we lump under the banner of “enterprises” trying — and often struggling — to adopt the sorts of DevOps culture and practices we see at places like Etsy. As we tried to catalog the success and failure patterns of DevOps adoption in the enterprise, we ran into an interesting problem: we couldn’t precisely define what makes a company an enterprise. Without a well-understood context, it was hard to diagnose inhibitors or to prescribe any particular advice.

So, we decided to pause our article and turn our minds to the question “What is an enterprise, anyway?” We first tried to define an enterprise based on its attributes, but as you’ll see, these are problematic:

More than N employees
Definitions like this don’t interest us. What changes magically when you cross the line between 999 and 1,000 employees? Or 9,999 and 10,000? Wherever you put the line, it’s arbitrary. I’ll grant that 30-person companies work differently from 10,000-person companies, and that 100-person companies have often adopted the overhead and bureaucracy of 10,000-person companies (not a pretty sight). But drawing an arbitrary line in the sand isn’t helpful.

Read more…

Comments: 7

Do you need a data scientist?

Data science is hard but it isn’t dark magic.

The question “do you need a data scientist?” came up a lot when I was a management consultant for a global firm that successfully incubated data science within a few enterprise organizations. It’s hard. The discussion is hard and the culture clash for data scientists is hard. Many approach data science as some dark magic from Hogwarts. It’s not. Investigating a hypothesis takes time. Spontaneously generating data and building a model against that data doesn’t work. Understanding who you need and how they will fit into your organization is challenging. Where do we put them? Who do they interact with? What is the hand-off? How do we structure a team around the project? How do you execute a project? Even better, how do we make MONEY? Yet, before we go there, perhaps we should step back a bit and think of this as a strategic question. Because maybe you do need a data scientist and maybe you don’t.

Read more…

Comment: 1

Smuggling Web Practices into the Enterprise

How fast can the enterprise change?

At last year’s Fluent Conference, I kept having the same conversation with attendees from large companies. They had come to the show with a mandate from their bosses to figure out how to bring that fast-moving web work into their slow-moving enterprise systems.

I enjoyed some of that same conversation this year, but also a different note: even with management support, making that transition was difficult. Some parts meshed, others were difficult: changes in one place could reverberate through many others. The whole concept of “rapid prototyping” fit badly with a variety of technologies and approaches meant to minimize unpleasant surprises. Even eight years after the advent of Ajax, a variety of server-centric techniques limit the flexibility of front-end developers.

Someone at lunch said that “the technology helps, but the culture matters.” A few others talked about how everyone wanted better front-end work, but thought it could be grafted easily onto existing back-end practice. The shiny parts are easy to talk about, but the plumbing is harder.

I was happy to see Bill Scott (@billwscott) of PayPal take on these challenges in his keynote. Bill wasn’t smuggling anything—that would be difficult under the title “Clash of the Titans: Releasing the Kraken.” He was brought to PayPal to change the company, to bring the lean “build – measure – learn” approach. In a risk-averse world, with “a 20-day class on how to use their version of Spring,” Scott had to change the “culture of a long shelf life” (something publishing folks are starting to do as well).

It’s a hard-hitting talk, calling for major change, skunkworks projects, and shifts in both company culture and technology.

Read more…

Comment

eZ Publish: A CMS Framework with Open Source in Its DNA

Leading eZ Publish advocates look at what lies ahead for CMS programmers and users


There are a variety of options when it comes to content management. We’ve explored Drupal a bit, and in this email interview I talked to some folks who work with eZ Publish. It is an open source (with commercial options) CMS written in PHP. Brandon Chambers and Greg McAvoy-Jensen talk about how the platform acts as a content management framework, how being open source has affected the project, and what we should expect to see coming up for CMS in general.

Brandon Chambers is a Senior Developer at Granite Horizon, an eZ Publish integrator. He has 14 years of web development experience focused on open source technologies such as PHP, MySQL, Python, Java, Android, HTML, JavaScript, AJAX, CSS and XML.

Greg McAvoy-Jensen is a member of the eZ Publish Community Project Board. He also founded and is the CEO of Granite Horizon, and has been developing with eZ Publish since 2002.

Q: What problems does eZ Publish solve for users?

A: eZ Publish grew up not just as a CMS, but as a content management framework. It sports a flexible and object-oriented content model (an important early decision), and provides developers an MVC framework as a platform for building complex web applications and extending the CMS. Like any CMS it makes content publishing accessible for the non-programmer, and provides an easy editorial interface. eZ Publish does a fine job of separating content from presentation and providing reusability and multi-channel delivery. It targets the enterprise more than smaller organizations, so the software quality remains pegged at high standards, and high degrees of flexibility and extensibility continue to be required.

Q: How do you feel being open source has affected the project?

A: Fourteen years on, eZ Systems is still firm that open source is in its DNA. This foundational commitment created a culture of sharing, and it attracts developers who prefer to share their code and to collaborate with others outside their organization for the benefit of their customers. Contributions flow in as both extensions and core code pull requests. The commercial open source model, similar to Red Hat’s, means the vendor takes primary responsibility for code maintenance and development, and derives its profit from support subscriptions, while leaving customizations to its network of certified partners. Because the source is open, organizations evaluating the software can have their developers compare the code of, for example, eZ Publish and Drupal, and make their own determinations. This, in turn, keeps the vendor accountable for the code: eZ engineers program knowing full well that the world can see their work.

Q: What distinguishes eZ Publish from other CMS options?

A: While there may be a thousand or so CMS’s around, analysts typically look at something more like 30 that are important today. eZ Publish fits into that group, most recently by inclusion on Gartner’s Magic Quadrant beginning in 2011. Not all open source CMS’s have a vendor behind them who both provides support and has full control over the code, a level of accountability required in enterprise applications. eZ is a great fit for particularly complex implementations, or situations where there is no assurance that future needs will be simple. And despite the complex customizations developers do with eZ Publish, they rarely interfere with upgrades.

eZ’s engineers recently became dissatisfied with the merely vast degree of flexibility they had built into the MVC framework, so they’ve now moved the whole system on top of the Symfony PHP framework. eZ Publish is now a native Symfony application, the only CMS to utilize Symfony’s full stack. This leverages the great speed and excellent libraries Symfony provides, and makes eZ easier to learn by those who are familiar with Symfony. Some CMS’s require many plug-ins just to get a basic feature set going on a site, but eZ Publish has long included granular security, content versioning, multi-language support, multi-channel/multi-site capability, workflows, and the like as part of the kernel.
Read more…

Comment

Doug Hanks on how the MX series is changing the game

Doug Hanks (@douglashanksjr) is an O’Reilly author (Juniper MX Series) and a data center architect at Juniper Networks. He is currently working on one of Juniper’s most popular devices – the MX Series. The MX is a routing device that’s optimized for delivering high-density and high-speed Layer 2 and Layer 3 Ethernet services. As you watch the video interview embedded in this post, the data is more than likely being transmitted across the Juniper MX.

We recently sat down to discuss the MX Series and the opportunities it presents. Highlights from our conversation include:

  • MX is one of Juniper’s best-selling platforms [Discussed at the 0:32 mark].
  • Learn if the MX can help you [Discussed at the 1:00 mark].
  • What you need to know before using the MX [Discussed at the 6:40 mark].
  • What’s next for Juniper [Discussed at the 9:39 mark].

You can view the entire interview in the following video.

Read more…

Comment

Six disruptive possibilities from big data

Specific ways big data will inundate vendors and customers.

My new book, Disruptive Possibilities: How Big Data Changes Everything, is derived directly from my experience as a performance and platform architect in the old enterprise world and the new, Internet-scale world.

I pre-date the Hadoop crew at Yahoo!, but I intimately understood the grid engineering that made Hadoop possible. For years, the working title of this book was The Art and Craft of Platform Engineering. When I started working on Hadoop after a stint in the Red Hat kernel group, many of the ideas that had been jammed into my head, going back to my experience with early supercomputers, all seemed to make perfect sense for Hadoop. This is why I frequently refer to big data as “commercial supercomputing.”

In Disruptive Possibilities, I discuss the implications of the big data ecosystem over the next few years. These implications will inundate vendors and customers in a number of ways, including: Read more…

Comment: 1