- User Onboarding Teardowns — the UX of new users. (via Andy Baio)
- Microsoft’s Productivity Vision — always-on thinged-up Internet everywhere, with predictions and magic by the dozen.
- Machine Learning Done Wrong — When dealing with small amounts of data, it’s reasonable to try as many algorithms as possible and to pick the best one since the cost of experimentation is low. But as we hit “big data,” it pays off to analyze the data upfront and then design the modeling pipeline (pre-processing, modeling, optimization algorithm, evaluation, productionization) accordingly. (A code sketch of the small-data approach follows this list.)
- Ten Simple Rules for Lifelong Learning According to Richard Hamming (PLoScompBio) — Exponential growth of the amount of knowledge is a central feature of the modern era. As Hamming points out, since the time of Isaac Newton (1642/3-1726/7), the total amount of knowledge (including but not limited to technical fields) has doubled about every 17 years. At the same time, the half-life of technical knowledge has been estimated to be about 15 years. If the total amount of knowledge available today is x, then in 15 years the total amount of knowledge can be expected to be nearly 2x, while the amount of knowledge that has become obsolete will be about 0.5x. This means that the total amount of knowledge thought to be valid has increased from x to nearly 1.5x. Taken together, this means that if your daughter or son was born when you were 34 years old, the amount of knowledge she or he will be faced with on entering university at age 17 will be more than twice the amount you faced when you started college.
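The arithmetic behind that last item, written out with the excerpt’s own round numbers (17-year doubling time, 15-year half-life):

```latex
% total knowledge grows with a 17-year doubling time
T(t) = x \cdot 2^{t/17}, \qquad T(15) = x \cdot 2^{15/17} \approx 1.84x \approx 2x
% with a 15-year half-life, half of today's knowledge is obsolete in 15 years
O(15) = x \left(1 - 2^{-15/15}\right) = 0.5x
% knowledge still thought valid
T(15) - O(15) \approx 2x - 0.5x = 1.5x
```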
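And a minimal sketch of the “Machine Learning Done Wrong” point about cheap experimentation, in Python with scikit-learn; the dataset and candidate models here are my stand-ins, not the article’s:

```python
# Small-data regime: it is cheap to try several algorithms and keep the best.
# Uses scikit-learn's built-in breast cancer dataset as a stand-in.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=5000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "svm": SVC(),
}

# 5-fold cross-validation: with small data this whole loop runs in seconds,
# so exhaustive experimentation is the rational strategy.
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```

Once the data no longer fits in memory, each pass through that loop can take hours, which is why the excerpt recommends upfront analysis and a single carefully designed pipeline instead.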
WebAssembly – wasm – skips that final step, producing a binary format, technically a compressed AST encoding. Unless you’re going to be building compilers, you can compare wasm to a bytecode system. There is a text format for debugging, but the binary emphasis yields substantial extra speed as it skips parsing and minimizes decompression.
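One way to see the text/binary split concretely is a sketch in Python using the wasmtime package (my choice of embedding, not something the excerpt mentions); the module below is a toy add function:

```python
# Sketch: the same wasm module in text form (WAT, the debugging format) and
# in the binary form engines actually consume. Assumes `pip install wasmtime`.
from wasmtime import Engine, Instance, Module, Store, wat2wasm

wat = """
(module
  (func (export "add") (param i32 i32) (result i32)
    local.get 0
    local.get 1
    i32.add))
"""

binary = wat2wasm(wat)              # compact binary encoding of the AST
print(f"text: {len(wat)} bytes, binary: {len(binary)} bytes")

engine = Engine()
store = Store(engine)
module = Module(engine, binary)     # no source parsing: validate, decode, go
instance = Instance(store, module, [])
add = instance.exports(store)["add"]
print(add(store, 2, 3))             # 5
```

The size and startup difference is the excerpt’s point: the engine validates and decodes a compact binary rather than parsing source text.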
Microsoft .NET Team Program Manager Beth Massi on the open source .NET Core.
You might have heard the news that .NET is open source. In this post I’m going to explain what exactly we open sourced, why we did it, and how you can get involved.
If you’re not familiar with .NET, it’s a managed execution environment that provides a variety of services to its running applications, including automatic memory management, type safety, native interop, and multiple modern programming languages that make it easier to build all kinds of apps, for nearly any device, quickly. The first version of .NET was released in 2002 and quickly picked up steam in many businesses. Today there are over 1.8 billion active installs of the .NET Framework and 6 million .NET developers in the world.
The .NET Framework consists of these major components: the common language runtime (CLR), which is the execution engine that handles running applications; the .NET Framework Base Class Libraries (BCL), which provide a library of tested, reusable code that developers can call from their own applications; and the managed languages and compilers for C#, F#, and Visual Basic. Application models extend the common libraries of the .NET Framework to provide additional libraries that developers can use to build specific types of applications, such as web, desktop, and mobile apps. For more information on all the components in .NET 2015, see Understanding .NET 2015.
There are multiple implementations of .NET, some from Microsoft and others from other companies or open source projects. In this post, I’ll focus on .NET Core from Microsoft.
Complement a good testing program and identify hard-to-find bugs with static analysis.
Let’s consider compiler warnings. They are produced without executing the code, so the compiler is doing static analysis. Their aim is to inform the developer that the code, though legal, is probably wrong. Suppose you were a compiler developer and you wanted to add a new warning; what characteristics must that warning have?
- There must be some statically identifiable pattern to the suspicious code.
- The pattern must be common and plausibly written by a developer; developing a warning for a too-rare pattern or completely unrealistic code is effort that could be better spent on other features.
- The warning must have a low “false positive” rate; a warning must actually identify defective code more than, say, 99% of the time. False positives encourage developers to eliminate the warning by turning the warning off, or worse, by incorrectly changing the code. There must be a way to eliminate the warning without introducing a bug into the code.
- The pattern must be identified extremely quickly; slowing the build process by anything more than a few percent is unacceptable.
I always recommend that everyone use the strictest warning settings on their compiler, pay attention to warnings, and (carefully) fix them all. Even fix the false positives; if the code was weird enough to fool the compiler then it’s weird enough to fool a human, and you don’t want to have “expected” warnings distracting you from actual warnings.
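Those characteristics describe compiler warnings, but they fit any static checker. As an illustration (mine, not the article’s), here is a Python pattern that ticks all four boxes and that most linters flag, e.g. pylint’s dangerous-default-value warning:

```python
# Suspicious pattern: a mutable default argument. The default list is created
# once, at function definition time, and shared across calls -- legal code,
# but probably not what the author meant.
def append_item(item, items=[]):
    items.append(item)
    return items

print(append_item(1))  # [1]
print(append_item(2))  # [1, 2] -- surprise: state leaked between calls

# The conventional fix, which silences the warning without masking a bug:
def append_item_fixed(item, items=None):
    if items is None:
        items = []
    items.append(item)
    return items
```

Note how the fix both eliminates the warning and removes a real hazard; a checker whose warnings can only be silenced by contortions fails the false-positive criterion above.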
Identifying the key requirements of a web application cloud architecture.
Download a free copy of “Azure for Developers,” an O’Reilly report by experienced .NET developer John Adams that breaks down Microsoft’s Azure platform in plain language, so that you can quickly get up to speed.
One of the most natural uses of the cloud is for web applications. You may already be using virtual machines on your own systems to make deploying your applications easier, either to new hardware or to additional servers. Microsoft Azure uses virtualization too, but it also brings useful benefits that virtualization cannot deliver alone. By hosting your application in the cloud, you can leverage automatic scaling, load balancing, system health monitoring, and logging. You also benefit from the fact that managed cloud platforms help narrow the attack surface of your system by automatically patching the operating system and runtimes and by keeping systems sandboxed. Let’s look at some examples of how to build common web applications inside Microsoft Azure.
Imagine that you work for a retailer who generates a significant amount of revenue through online sales. Imagine also that this retailer has been around for long enough that it already has an established web architecture that runs in a private data center. This retailer has decided that it wants to move to a hosted platform so that it no longer has any data center responsibilities and it can focus on its core business. How do you replatform this web application into Microsoft Azure? Let’s first identify some requirements for this system:
- It has high utilization and needs to serve a large number of concurrent users without timing out, even during peak hours such as Black Friday sales.
- It needs to accommodate a wide variety of products in its database that do not necessarily all follow the same schema.
- It needs a fast and intelligent search bar so that customers can find products easily.
- It needs to be able to recommend products to customers as they shop to help generate additional revenue.
However these requirements are being met today in the private data center, I can suggest some guidelines for reproducing this system in Microsoft Azure so that you boost performance instead of just replicating it. I will take each of these requirements in order and explain how to leverage certain Azure components so that each is properly met.
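As one hedged illustration of the second requirement (products that don’t all share a schema), a document database is a natural fit. Here is a minimal sketch with the current azure-cosmos Python package; the endpoint, key, and names are placeholders, and the service choice is my assumption rather than the report’s prescription:

```python
# Sketch: a schema-flexible product catalog in Azure Cosmos DB.
# Assumes `pip install azure-cosmos`; endpoint/key/names are placeholders.
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://<your-account>.documents.azure.com:443/",
                      credential="<your-key>")
db = client.create_database_if_not_exists("retail")
products = db.create_container_if_not_exists(
    id="products", partition_key=PartitionKey(path="/category"))

# Documents in one container need not share a schema: a book and a shirt
# carry different fields, and adding a new field needs no migration.
products.upsert_item({"id": "1", "category": "books",
                      "title": "Azure for Developers", "pages": 70})
products.upsert_item({"id": "2", "category": "apparel",
                      "size": "M", "color": "navy"})
```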
Microsoft, Google and pushing business models too far.
I realized yesterday, though, that:
- Microsoft ruined their brand for me by holding too tightly to things that they considered theirs. (Software.)
- Google is ruining their brand for me by holding too tightly to things that I consider mine. (Identity, everything they can possibly learn about me.)
It’s a weird difference, but the Google version makes me much sadder about the world. As I’d tell a mugger, “You can have my wallet, just don’t take me.”