238. Death of Software. Nah.

Too much of what is going on with AI is a race to get to a theoretical end state of a whole new world of business and technology. Many pushing this were around for the tail end of the transition to an internet-based economy and are struggling with what appears to be a “compressed timeline,” much the way people record disruption as a “moment” versus a journey.

I’ve been thinking about where we are today with AI and the transition to this next wave, generation, platform of computing, and how it might look like previous transitions but really isn’t. I’m thinking about three different transitions: the transition to the PC and graphical interface, the shift to online retail, and the pivot to streaming.

AI changes what we build and who builds it, but not how much needs to be built. We need vastly more software, not less.


PC and Graphical Interface

The paradigm shift to the PC and graphical interface is a wild one when looked at in context. The most important thing about the PC is that the first predictions were de minimis, followed by the prediction that it would eliminate mainframe computing and the data center. HAHA. Everyone was wrong all around.

First, the run-up to the internet saw the installed base of PCs go from under 100M to nearly 1B. PC growth was insane. But in the process, the rise of data centers happened along with that. Why? Because people first connected their PCs to data centers with mainframes (and still do, ask IBM), and then just replaced the hardware in data centers with PC hardware. No one expected the PC to innovate and evolve this way.

Second, the graphical interface enabled the desktop, but the character interface, CLI, not only didn’t go away, it remained the underlying architecture of the entire future of the platform, starting with the cloud and extending to iPhone and Android. So here we are today where the most widely used new computer interaction is via a command line for every type of user, end user, developer, IT professional, etc.

The lesson here is that whatever the world thought would end just wound up being vastly larger than anyone thought. And the thing that people thought would forever be replaced was not simply legacy but became a key enabler.

The other aspect of this is, of course, that entire new companies were created in the process. Google, Meta, Amazon AWS, Salesforce, and more. Some companies remained incredibly strong, doing what they had always done while transitioning to new ways of doing those things, like SAP. And some companies like Microsoft, Dell, and Apple retained what they did but in entirely new and wildly successful ways. Yes, a lot of companies along the way, from EMC to Sun to Lotus and a seemingly very long list, did not make it. Such is Schumpeter.

Retail

No area of the economy was so large and yet under so much pressure during the internet run up as the “imminent death” of retail and the consternation over how “low margin” and “inefficient” retail would be subsumed by people clicking to have things magically appear. There was a new world a heartbeat away in 1995.

I remember one tech conference where the then soon-to-step-down Walmart CEO was completely under the microscope of a tech-centric audience who simply believed Walmart was dead in the water against Amazon. Around this time is when existing retail leaders began to talk about a thing called “omnichannel,” which meant you could buy online, in a store, or a combination such as order online and pick up in store, and Walmart logistics would rule the day. Everyone was skeptical because Amazon was a logistics monster and Walmart had nothing but old-style SAP and mainframes. The Street wanted WMT to break out the financials of different channels to prove the future was online.

By 1999 the world had started to have enough and people were asking, “where’s the profit?” Suddenly there was an investor shift to companies that were stable retailers.

Amazon.com was famously labeled “Amazon Dot Bomb,” where no one had tolerance or patience for Bezos to deliver on all the investments. WMT was mostly ahead of AMZN until about 2005 when AWS appeared. Then the Street had a lot of twisting over whether to think about Amazon as two companies, whether one was funding another, etc.

Well here we are today with trillion-dollar retailers, both of whom are monsters, executing extremely well, serving customers, and dominating the world of retail in completely different ways. It is incredible.

And the rest of retail? It went through the same cycle that retail has been going through for literally 100 years. Transitions from store and brand formats of one kind to another. From small standalone to department to malls to big box to online, from mega brands to generics to niche online brands. Retail has and will always be in a constant state of flux for many categories because it is fundamentally about taste, logistics, and assortment. Retail gonna retail.

As with the PC, there are entirely new companies in retail, and more every day, and of course old mega companies and new mega companies. There are brands that survived, brands that thrived, and brands that were created. There were brands that were created but also crashed out. Schumpeter continues to rule.

The timeline for this was wildly longer than the 1995 predictions, the 2000 crash, or even the 2020 pandemic craze suggested. Literally an entire professional career worth of transition. And of course, it is still happening. Patience is a great teacher.

Media

No area of the economy has demonstrated more doom, gloom, and then resurgence in completely unpredicted ways than media as broadly defined, news, sports, long- and short-form video, music, and personal. It was “super clear” from the early days of the World Wide Web to those in the know that we were dealing with an entirely “new media” landscape.

From the first days of Netscape to 2000, the rush was on to build “new media” assets. This was everything from replacing the cable that connected to your house for TV to how we read the daily news to how we are entertained. “Content is king,” they shouted from magazine covers.

Then a modem-based online network bought the most analog of all mega media companies in the AOL-Time Warner merger. Suddenly we were seeing “You’ve got mail” in movie theater walk-in videos and at the start of films. It was insane.

Of course during this time the news would die. New tech-centric companies would aggregate, scrape, or simply license media so as not to deal with all that messy content creation, staffing, and cost. Everything good would be a click away. Music would be free with Napster.

The iPod brought hardware to media and also brought economics. Suddenly consuming media was going to be microtransactions and we’d all carry every bit of media with us all the time, all the media ever created. But no one would ever want to watch video on a tiny iPod screen, Jobs told us.

Then came UGC or user-generated content. That was going to obliterate all those professionals and distributors. Suddenly we were post-crash, a crash that no one predicted, and had a whole new set of companies and predictions while we watched “legacy media” and “dead tree” media wither away. There were hundreds of companies where anyone could post anything. This was the new world. Micro communities, social, and then came the phone to pull this all together.

The most legacy of all media types was the DVD, and Netflix had become this wildly popular way to go to a website, get a rental DVD and later mail it back, no rewind required. Every corporate mailroom and apartment mailbox was a sea of red envelopes every Monday morning. So many companies were trying to be software to bridge the physical world. Do something online with a website and the company would find a way to connect the physical world. Magazines and newspapers were offering online and offline content. It sounded like omnichannel all over again.

But through all this, once again, major companies found new ways to do what they always did, which is focus on creativity, visualization, and storytelling just on a new internet platform. News and writing figured out ads and subscriptions. Social networks learned people wanted to do more than share micro updates about breakfast. The process of creative destruction was taking place.

Importantly, we were now at least a decade after the initial predictions that everything would be laid to waste by a web browser on a computer at work. Now we thought about phones. We thought about streaming.

Netflix was now making stories. The DVD business was their legacy business. Then HBO started streaming. Everyone got even more content and more quality than ever before. It was a crazy explosion of creativity and output. There was more news than ever before. There were people paying more for online news than they ever paid for static print news. Some players were stronger. Some were weaker or gone. And most importantly there were many new players doing new things in new ways, like Netflix. What person can imagine waiting seven days or an entire summer for the next episode of a show? Who could think of not being able to binge on a download on a plane or have any book stream through your headphones? Audiobooks were a super weird niche before the Kindle and before online.

There is vastly more media today than there was 25 years ago. In fact, some argue we have a new modern surplus. Any graph of available media shows a crazy amount across every category. Some categories are different and companies change, but that’s a given. Schumpeter is undefeated so far.

So…

In each of these examples you can be someone who was there and claim “told you so,” or you could be someone who placed all their bets on “whole new world,” or somewhere in between.

In reality, almost all the people that called for full disruption and a whole new world were wildly optimistic on the timeframe. The thing is, the world needs these people because the transition is not a straight path. It is not “this is what we know needs to be built and now build it.”

Instead, what is absolutely part of this whole arc are people who are certain we are less than five years away and are in a rush to build with absolute belief in where things are heading, and people who support them with their labor or dollars. They are more worried about having already missed the opportunity, much as Marc Andreessen has often said he felt upon arriving in Silicon Valley in 1994. Their success is awesome. Their failure teaches the broad community lessons. That’s also why the network and community of SV was so key.

The people who are certain that what is going on is forever away, or even worse never going to happen, well, they are important too. Because the transition takes so long and is so unevenly distributed, having people focused on legacy is key. Without all those people working on mainframes at IBM we would never have had online travel or banking, because that whole industry runs on that iron.

Wall Street is filled with investors of all types. There’s also a community, and they tend to run in herds. The past couple of weeks have definitely seen the herd collectively conclude that somehow software is dead.

That the idea of a software pure play will just vanish into some language model.

Nonsense.

Here’s what will happen:

1. There will be more software than ever before. This is not just because of AI coding or agents building products or whatever. It is because we are nowhere near meeting the demand for what software can do. This holds for software I use on my own, software a business needs, software an organization needs, or software to control the explosion of devices that replace every analog device with an automated one.

2. AI-enabled or AI-centric software is simply moving up the stack of what a product is. Software did not create online banks. Banking always required software. Software that faced a consumer, banking or traveling or shopping or reading or viewing, just became an essential part of the bank, travel, etc stack. Sometimes this created new from-scratch companies and sometimes it created new companies inside old ones. Industries were restructured as assets moved around. However big and complex you think a legacy business is today, it will be vastly larger and more complex tomorrow, and it will do vastly more. Think I’m crazy? Consider what banking was like in 1995. If you have any experience you know your choices, features, options, etc were one-thousandth of what you have today, even if fundamentally you got a paycheck, paid your bills, and might have had a credit card.

3. New tools will be created with AI that do new things. The number of processes and experiences in work and life that are not yet fundamentally improved by software is far greater than the number that have been improved by software. Not just big things, like everything in your home, but the whole nature of work, collaboration, transportation, and more. There are so many new inventions that will happen because of autonomy and robots it is hard to even comprehend. However many tools you think exist today, there are exponentially more that will exist.

4. Domain experience will be wildly more important than it is today because every domain will become vastly more sophisticated than it is now. This is not just because service providers and builders have better tools, but because customers do as well. I will never forget how bankers said they were going to be so much more efficient by having a few college kids use spreadsheets while they did all the hard work of deal-making. Little did they know that to even be a banker by 1995 you were going to be the person using the sheet to build a model. Repeat for consultants. Graphic artists. Writers. Lawyers. Doctors. And every single domain. This is going to happen all over again. Yes, some supporting jobs in domains went away, but they were not just replaced with even more skills, they were replaced with many more people. More people work in more banking locations today, per capita, than ever before. There’s more expertise required in every domain, even with three decades of automation.

5. Finally, it is absolutely true that some companies will not make it. It is even true that over a very long time, longer than a career or generation, every company will be completely different or their product line and organization will have dramatically changed. This will not broadly happen on any investing timeline. Look at today’s retail and compare it to 2000. Look at large media companies. Look at the world of computing.

Strap in. This is the most exciting time for business and technology, ever.

Author notes: an LLM was instructed to provide a “light proofread only, with minor changes for spelling, grammar, and clarity. Do not delete content or change meaning. Preserve the tone, rhythm, and point of view.” Any emdashes were in the original :-)

This post originally appeared as an X article and then on the a16z substack. Just keeping everything here too.

237. CES 2026 – “The Future Is Here” and “Innovators Show Up” #CES2026

CES is a trade show and not a broad consumer event. It just so happens that it attracts so many in the industry that it has often been covered as a broad-interest news event. My sense is the coverage was somewhat muted this year (judging by the extremes of X and CNN), but I have a theory as to why. The show had on display extremes of innovation. On the one hand, everyday consumer tech from televisions, vacuums, and computers had a great show of substantially improved products, but because we all have these and “new and better” isn’t going to get us to the store lining up at midnight, it doesn’t get a lot of coverage. On the other hand, autonomy, robots, and even artificial intelligence broadly are truly some of the most amazing innovations of my lifetime, but the products are still finding their way, so you’re not going to run out and buy those either.

This is the new main entrance that unifies the entire LVCC into a north, central, and south hall. It was a long decade of construction and is a massive improvement for the very large CES show.

Maybe you have to take my word for it, but this year’s CES had a lot of extremely solid advances in things that matter, and at the same time the progress is more than just tangible. The old saying “the future is here, it just isn’t equally distributed” applies. For many at the show, seeing the Amazon Zoox picking up and dropping off people on the Strip was mind-blowing. It was just last year that these bubble cars with no steering wheel were only on display on a show floor.


The show was big this year, attendance up over 2025: 4,100+ exhibitors, 148K+ attendees, and some 6,900+ media (according to CTA). That compares with 2025’s 4,500 exhibitors, 141K+ attendees, and some 6,000+ media.

The number of exhibitors might catch your attention. My sense is there has been somewhat of a restructuring of the show. There’s always been a tension for big companies over attending with a big presence or just sending the right bizdev people to meet privately in hotel rooms. There was some noticeable shifting this year. Samsung and Sony both left the central hall, as did Panasonic to some degree (making a smaller, boxed-in, theater-only booth), which freed up a lot of space and dramatically improved the flow of people in what was historically a completely jammed and miserable entrance experience. Samsung set up their own convention within a convention at a hotel, complete with reservation times. It was a wonderful experience where you could really see stuff and even breathe at the same time.

The PC industry has long been in retreat at the show. Dell and Lenovo have done primarily small private events for years and even those went away. Instead, this year Lenovo did a monster keynote at The Sphere and had signage everywhere for what they were talking about. I didn’t have the energy for a special trip.

Nvidia followed the Samsung model and at a different hotel set up their own exhibit space. It was branded as a CES-themed “The Foundry,” which makes a lot of sense. In my view Nvidia is now the underlying technology of the whole of CES the way that “home theater” was for the first generation, then the computer, then the internet, then phones. More on Nvidia below.

With Sony’s absence we also saw Nikon and Canon both pull back. This meant that “image capture” was not represented at the show. There are other places for this such as the NAB show for broadcast or Photokina in Germany. But this made me sad since I love photography.

For the past decade or so, CES was almost an electronics show combined with a car show, a mini-Detroit Auto Show held in the North Hall. The car makers had huge booths with exotic cars or prototypes, and the hall was filled with a vast array of after-market products for car enthusiasts, all of which was difficult to navigate and often only borderline tasteful. The rise of EVs and Tesla brought a wave of highly technical car components aimed at car makers, from platform batteries to charging infrastructure to electric motors. Then with autonomy came all the components for self-driving such as LiDAR. By and large all this has been replaced by the broad category of mobility, which for a while meant an over-abundance of scooters. This year the emphasis of mobility was on full autonomy and a bit of general mobility. These changes bring more focus to the show, and the North Hall was once again traversable.

A panorama of some country booths from the province of Ontario (as part of an all-Canada section), Turkey, Korea (a single province), and Romania.

The ever-expanding role of international or country-level booths continued. These booths are when some country-level agency, such as a ministry in charge of science policy or an economic development office, rents a big chunk of floorspace and then curates startups or other companies to attend. This is an amazing opportunity for a tiny startup in Korea, Romania, or Italy to come to Vegas and have a booth. But as an attendee these are extraordinarily difficult because “country of origin” is not a pivot you care about compared to healthcare, home security, enterprise, or whatever. Plus, navigating the whole show as a series of booths within booths is logistically impossible. That said, some countries over-achieve. I lost count but there were at least two dozen booths sponsored by one or another major Korean university, government agency, or trade group. Japan had two giant booths next to each other representing Japan but neither could tell me who the other was! There are other interesting ways floor space is allocated. For example, the AARP (American Assoc. of Retired Persons) had a large space dedicated to technology for aging and health.

On top of all that, this was the first year without a major part of the convention district under construction. Truly a remarkable decade of remodeling. The amount of super nice exhibit space that is all gracefully connected and accessible with bathrooms and places to get food is an accomplishment. Along with that there are huge numbers of brand-new hotel rooms all easily walkable from the convention center. It kind of blows my mind to think we used to come to the COMDEX show with almost twice as many people, half as much space, half the number of hotel rooms, no food anywhere, and hotels all 2 miles away.

One meta comment about “global trade”: this year the show was as global as ever. I did not see or hear any commentary on global trade issues. In fact, I think there was less of it than last year, when booths were out front saying “opening factories in <not our home country>”. There’s a broad set of new normals and it felt like everyone is working through what needs to be done for every market to have new levels of safety and self-determination.

As usual I am going to share a lot. That’s just because I love this stuff. You don’t have to love it as much as me. So here are the sections you can look for:

  1. AI and Nvidia

  2. Television and Display (includes my show favorite, Samsung 130” MicroRGB TV)

  3. PC and Tablets (includes my show favorite, Dell XPS laptop)

  4. Home, Automation, Security (includes my show favorite, Amazon Ring products)

  5. Health (includes my show favorite, Triage 360° from Omedus)

  6. Robots

  7. Mobility

  8. Lifestyle and Accessories (includes my show favorite, Roam Smart Tracker)

  9. And Finally… (includes…an appearance by Clippy)

Why not hit subscribe? It is free and I write about tech stuff that is interesting. Plus, you can read for free the archives of “Hardcore Software”, my personal story of Microsoft from some early days.

93,193 steps, about 40 miles!

AI and Nvidia

It’s been clear for quite some time we’re in the midst of a platform transition. What hasn’t been clear is what shape it will take. The progress Nvidia has made over the past year, and the strategy they have, has solidified my view they are not just benefiting from but setting the direction for how things move forward in both hardware and software, much more than I think most are saying. My view is that the (stock) market dynamics are making this seem like a battle where there will be a GPU winner and an LLM winner. That pre-supposes the next platform is LLM-based but I think that is too narrow a definition.

The question people should be asking is who is leading in a compute platform—it is not only LLMs that are the next platform but a switch from algorithmic computing (bound by instructions per second) to data computation (based on model computations or “tokens” broadly defined) per second.

The race is no longer about MIPS, megahertz, or parameters but about computing on tokens. As with those old-style measures, tokens will have infinite demand and essentially infinite supply, and the price will continue to drop. And like those other measures, the ability to compute on tokens across an entire network (the “edge”, devices, datacenters, hyperscale) means that whoever has a strategy working across all of those is the de facto leader in the platform.


Nvidia is the leader. The signage saying “The Next Generation Begins” is not aspirational. It is predicting the present.

Nvidia

Nvidia is executing better than any previous leader in these shifts in a more dynamic and fast-paced environment—better than IBM and mainframes, DEC/Sun in mini/workstations, better than Microsoft with Windows, or Google and the internet. They are executing at the level Apple executed from 2006-2025, but doing so in a way that drives an entire industry at every level of the stack, not “only” (not trying to understate Apple, just to contrast) at the app ecosystem level. Nvidia is executing like Microsoft, Intel, and open source all combined. It is incredible.

Nvidia brings software assets and a surrounding ecosystem at multiple levels, proprietary APIs unlike any in the market and open source contributions on par with anyone else’s. They bring hardware from the edge to devices to hyperscale, not just compute but networking, sensors, and more. They are marshalling the “old” PC ecosystem in new ways to redefine what it means to even be a computer.

Author’s note: We bet on and with Nvidia to redefine the PC when we built the original Surface computer. While I am biased in favor of Nvidia today, I kind of feel validated. Read what it was like to Launch Windows on ARM with Nvidia at CES January 2011.

At CES 2026 Nvidia had a show within a show, first with Jensen having a pre-exhibit keynote where he among other things announced the latest GPUs are in production, and then with a large-scale set of booths at a hotel off the show floor. The square footage wasn’t huge, but the density of technology matched most of the rest of the show combined.

One aspect of the booths that really took me back to Microsoft twenty years ago was that the booths were staffed with line engineers showing off their own work. I had a half dozen conversations and listened in on a dozen more and was simply blown away. Nvidia is simply executing at the most phenomenal level with incredible talent that can easily answer rapid-fire, if not always on-point, questions. The only other company that could do that today would be Google, but it isn’t the whole company the way it is at Nvidia. The breadth and focus are what you notice.

Some examples for me were robotics, autos, biological discovery, and of course machine learning. Each of those stations was the absolute pinnacle state of the art. And all represented real products with real partners making real progress. This was not demoware. It was like being in the Windows booth in 1996 after spending the night putting signs on every computer on the show floor proclaiming “Windows Compatible”.

Below is an example, seen in many contexts (security, manufacturing, etc.), of real-time video processing, but what you can see here is the “platformization” of this technology – the hardware, software, and device integration all coming together. It’s just one example. Another example follows, another full-stack demonstration showing integration with robots for the purpose of ensuring safety.


There is a separate section below on PCs/Tablets showing traditional PCs. Nvidia was showing where PCs are heading. While AMD might also play a role here and fingers-crossed that Intel can pull something off, the modern PC is being defined right in front of us. A modern PC starts from the assumption that AI compute will happen on the PC where it will be free, low latency, private, and continue to improve exponentially. The modern PC will do a lot of what we call inference today on device. And like every era the most important step is when the device can bootstrap—that is the device can be used to build itself. We’re already seeing that and it is almost certain that soon enough it will be a routine part of development to do AI coding on device.

I know some will doubt that but I also lived through people saying a PC will never be powerful enough to develop PC software (note: the first versions of Word and Excel were cross-compiled and only debugged on Windows; Windows graphical tools were not used by developers until the 1990s, and even then most did not use Windows until NT). I also lived through people saying the PC would never be a powerful server, yet here we are.

With that context, and building on the Nvidia announcement a year ago of the DGX Spark platform (a full-stack computer running with an NVIDIA GB10 and MediaTek CPU, etc.), there was a full display of DGX-compatible devices from all the major OEMs. If this sort of display looks familiar it is because it is exactly what Intel was showing for laptops. This is the sign of an ecosystem bootstrapping. This year we’re seeing the tower version with a GB300, with Dell and Gigabyte announcing them. While these might be seen as developer machines or just “OEM packaging,” that is how the Windows ecosystem works today. These computers can be used in labs, for deployment of apps, and for developers.

Intel laptops in 2026. This could have been any year. We at Microsoft used to love these displays. But in this next wave, these are lacking the hardware and software assets. It’s a new era.
This is the Nvidia DGX tower with a GB300 by Gigabyte. Dell also announced a model. The image above this one is an Nvidia build being demonstrated by a member of the software team ❤️.

At the other end of the spectrum is where Nvidia is positioned in traditional PCs and here you can see Nvidia penetrated at the highest level. While these are often positioned as gaming (or professional graphics) level PCs, they are also indicative of the demand for Nvidia’s APIs across devices. It should be clear that the Nvidia software layer is becoming more mission critical to the kinds of applications being developed today, and while simply running AI in the cloud and relying on a traditional browser might be enough, I am convinced more and more will migrate to a local device where it will be more economical, more private, and more responsive to end-users.

AI Broadly

If you take my assertion that AI is the platform technology and Nvidia is the key enabler and first software on that platform, the next area to observe is what this technology is being used for. Below I will describe mobility, home security, health, and more. We are seeing literally everything get touched by AI. People will want to know or assert the killer application. It is clear chatbots and generative text, video, and audio are wild new applications but that is just the start, and it isn’t even clear if those generative applications will be the anchor. AI will be an ingredient in all the existing areas where software solves problems and will also create whole new solutions breaking down existing barriers. This is just like the graphical interface or internet or cloud.

The key inflection point we are at today is debating whether AI brings new solutions or is simply an ingredient to be used by existing solutions (and importantly software companies). If you were around in the late 1990s this was exactly the debate that was had about the internet. Then in the 2000s that was the debate about cloud. Reality today tells us that those were ingredients, and by and large the software and hardware in use today come from new companies and categories, even if the large existing companies continued to dominate the markets they created by using the new technology as an ingredient.

Across the show floor, particularly in the startup and smaller country booths (also startups), there were thousands of companies essentially doing “AI for <X>” where X was as broad as you can imagine. AI to:

  • diagnose a disease, read scans or labs

  • review or summarize an industry document

  • secure web traffic, detect phishing and spam

  • translate, dub, transcribe, caption text or video

  • choose ingredients and cook food

  • determine nutrition of a meal

  • water and fertilize plants in a pot or in the largest of fields

  • monitor and manage pet behavior and health

  • select proper lighting and enable a vast range of lighting scenarios indoors and outdoors

  • profile car and truck tires for wear and safety

  • detect cheating on exams and in interviews

  • coach people at work after an interaction with a customer, partner, or collaborator

  • improve support documents for readability and utility of complex “manuals”

As I walked the floor at every corner I was blown away by the companies with specific AI enabled scenarios solving actual problems in what might seem like narrow domains requiring domain knowledge to even approach the work. A few readers might know, but the original software market included highly specific software for farming, medical offices, construction, and real estate management (check out an early RadioShack catalog or here). The sign that something is a new platform is seeing all the above reimagined or done for the first time with the new tools.

There are also many solutions using a frontier LLM platform directly. I saw a dozen devices that in one form or another recorded conversations or notes to self and then, with a phone and app, provided transcription, organization, and follow-up. These even took the form of a MagSafe connection to the phone. This one by Plaud is an example.

Amazon recently acquired Bee computing and was showing off their Bee Pioneer ambient computing band. I think we all expect this space to see a resurgence as the models have dramatically improved.

Bee.Computer (now from Amazon). I took a blurry photo so this one is from Bee.

Vocci expressed ambient computing in a ring form factor. The ring has a tiny button on it to start/stop recording and a companion app to provide transcripts, summaries, organization of notes. I am extremely uncomfortable with this sort of recording without notification.

Nirva is a company more focused on journaling and mental well-being with jewelry providing the microphone and recording. It aims to go beyond recording and summarizing as “it tracks your mood, maps your social relationships, and delivers truly useful, personal insights and guidance.”

One thing many new AI applications are confronting is the concern over privacy, particularly when using the frontier models. This is driving a wave of innovation where capture and processing are happening on devices or even in a private cloud. For me this continues to reinforce that what is fancily called “the edge” today is really about having a device that does the work locally, which will be cheaper and more private.

In that spirit there were quite a few “devices” that are essentially AI-computers, which are really building a new kind of PC that has the GPU-compute capable of running the AI software that matters. We saw the full devices on the Nvidia platform but there were others. One I thought was cool was Tiiny AI. Tiiny connects to your existing laptop and provides a local web interface to the compute engine capable of running an array of models locally. The software layer is essentially an AI workspace. The hardware in the small box provides a long-term memory of your work, connection to your local data, along with 80GB RAM and almost 200 TOPS (whatever that might mean!)
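Since the spec sheet begs the question, here is one hedged, back-of-envelope way to read a number like “almost 200 TOPS” (trillions of operations per second, usually quoted at INT8). Every number in this little Python sketch is an assumption purely for illustration, not Tiiny’s actual specs:

    # Rough reading of a "200 TOPS" claim for local inference.
    # Every number here is an illustrative assumption, not a vendor spec.
    tops = 200                    # advertised trillions of operations per second
    ops_per_second = tops * 1e12  # convert TOPS to operations per second
    utilization = 0.2             # assume only ~20% is sustained on a real workload

    params = 8e9                  # assume a small 8B-parameter local model
    ops_per_token = 2 * params    # rule of thumb: ~2 ops per parameter per generated token

    tokens_per_second = ops_per_second * utilization / ops_per_token
    print(f"~{tokens_per_second:,.0f} tokens/sec under these assumptions")  # ~2,500

The point is only that a TOPS number can be translated into a rough tokens-per-second figure once you pick a model size and an honest utilization, and that the answer moves around a lot with both.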

Tiiny is a companion device. At the higher end are mini-PC form factors like the Minisforum MS-02, which is a full PC form factor that can be stacked/daisy-chained for even more capability. While a variety of these exist, they all run Linux and open source models on top of combinations of AMD, Intel, and Nvidia hardware. My view is these are a stop along the way to a more converged hardware standard, aka Nvidia.

I don’t see this activity for local compute abating even as the frontier models from hyperscalers continue to dramatically improve. One could point to the increase in “on-prem” data center efforts for either traditional workloads (private or secure cloud) or new AI workloads.

Finally, in the first year or two of chat and LLMs there was a view that many of the newest “applications” were simply “LLM wrappers”. This was a common claim in the early days of the graphical interface as well—new products look like most of the product is just surfacing intrinsic capabilities of the underlying platform, in that case windows, icons, menus, and so on. This is because there’s a perspective that the new capability from the underlying platform is the “hard part”. One look at Excel and we can see that those claims were never really true.

Still, CES had quite a few products that early on might appear to be easily subsumed by the model itself. Time will tell. For example, there was Loona, a consumer robot that is essentially a friendly interface to a GPT model. It provides a speech and expression-based interface to GPT capabilities. It is a physical version of the prompt engine you interact with using speech and audio and a bit of vision.

Very clearly CES lived up to its marketing and innovators in AI showed up in all forms. The debate over feature v. product v. company will continue. I would assert the answer to that will not change the fact that all of these scenarios will indeed be solved with AI-enabled software.

Television and Display

Some years of TV are about form factors like flexible, curved, or 3D and those years seem to come and go. Other years are about thinness or bigness as the main theme. You walk about not so impressed and think “these people aren’t doing anything”. Then in really tragic years it is all about “smart” and TVs try to turn themselves into computers and most people get frustrated and wish they would just make great displays, because the idea of signal has been completely separated from TV and 100% of TV buyers have a phone and/or a streaming box. All while we’re thinking this as consumers, the number of displays in our lives compounds exponentially (we carry and wear one, drive with several, and there are countless in the home and at work). Even so, the business is absolutely brutal and has leveled massive companies in the past – RCA then Toshiba then Sony – and has proven difficult for even the biggest Chinese companies to break into (Haier, Changhong). Today Korea overwhelmingly dominates with Samsung and LG.

This year the two companies showed exactly what we all want to see, which is innovation in picture and sound. The new displays are really incredible in color, thinness, size and soon price for early adopters. The buzzword this year was “RGB” with modifiers of Mini and Micro. In fact, if you take RGB, Mini, Micro, and LED you can mix and match and create any sort of new display tech you want. I was so confused. And the vendors all add modifiers like EVO (evolution) and Next Gen to confuse even more. Here’s the best I could do to explain for a world where we know LED and OLED and “O” is better:

Some of the permutations and combinations of LED, RGB, Mini, Micro, and organic include:

  • (Today’s Best) OLED self-emissive (no backlight) display where each pixel produces its own light and color, enabling perfect blacks. Can be made FLEXIBLE.

  • (Today’s mainstream) Mini LED: LCD that uses tiny white LEDs as a backlight, improving brightness and contrast through zone-based local dimming. Rigid.

  • (Latest) Micro LED: step up from OLED with a self-emissive (no backlight) display made from microscopic inorganic RGB LEDs, OLED-like rendering with much higher contrast.

  • (Latest) Mini RGB: LCD approach that replaces white backlight LEDs with red, green, and blue mini-LEDs, allowing more precise color but still has an LCD layer.

  • (STAY TUNED) Micro RGB: self-emissive (no backlight) display where each pixel is made of microscopic RGB LEDs, making for super thin and near-borderless panels.

The bottom line on displays is we’re in for a step function increase in blacker blacks, whiter whites, thinner bezels, and much thinner TVs.


Are you still confused? Me too. I’m sure I messed that up. I swear if you wanted to confuse a staff person ask them to compare/contrast or even define which tech they were showing off.

Here’s some up close photos showing different kinds of pixels. I got in a lot of trouble taking these so please appreciate them:


Below was the best TV of the show and will most probably cost as much as a Tesla should it make it to market. (Note: I learned after the show you can order the 115” 4K version for $30,000).


How thin will the TVs get? Micro RGB is the ultra-bezel-less, ultra-thin technology on the horizon but still must go through manufacturing ramp-up, so it is a way off. Here’s a 9mm thin bezel-less display from LG. I couldn’t figure out how to visualize the thinness, but here’s a smart IC next to it, which is just under 8mm, along with a shot of what that TV looks like on a wall. I got in trouble taking this photo.


It isn’t just picture. When outfitted as TVs (v. computer monitors), behind these displays are now a full wall of rear-firing speakers designed to bounce the sound off the wall and simulate a full range of audio capabilities normally heard from a set of speakers. Samsung declared it “The Year of Audio” with a soundbar that they claimed eliminated the need for both surrounds and a subwoofer. But if you get their latest wireless speakers you can add up to four for a full-range effect. Sounded incredible in the booth of course.


The TVs and resulting monitors are getting ever-increasing refresh rates as well. There’s a caveat with these in that they are starting to create synthetic framerates (the same way Nvidia does with the GPU) by computing extra frames. So, you get absurd claims like 1,040Hz, which kind of sounds questionable to me.
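For context on how a number like that can get constructed, here is a tiny sketch of the arithmetic; the specific figures are assumptions chosen only to illustrate the marketing math, not any vendor’s actual spec:

    # How a "synthetic" refresh-rate claim can be assembled:
    # native panel refresh multiplied by frames shown per real source frame.
    # Both numbers below are illustrative assumptions.
    native_refresh_hz = 260   # assume the panel's real refresh rate
    frames_per_real = 4       # assume 1 real frame plus 3 computed (interpolated) frames

    claimed_hz = native_refresh_hz * frames_per_real
    print(f"{claimed_hz}Hz claimed from a {native_refresh_hz}Hz panel")  # 1040Hz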

The other thing going on is the size of TVs continues to rise as fast as the lift on American pick-up trucks. TCL has found its niche as the mainstream (read: Costco, Walmart) supplier of ginormous sets for relatively low prices. They routinely sell a 98” unit. These are monsters that you really can’t hang on a wall or put on the second floor. But boy they are fun.


The big negative is that each vendor is attempting to apply AI to the image being displayed. So, this is going to drive A/V enthusiasts nuts and provide endless hours of hunting for the “disable” setting at family gatherings. Here’s HiSense (Chinese maker, usually highlighting their laser projectors which are super cool but still seem niche) using AI for everything.


Even with all this, there’s an effort to truly extend the viewing experience. While many of us see these with sports or YouTube TV, like everything else the TV makers want to embed it into their platforms. This can make for some interesting experiments. HiSense was showing a 21:9 aspect ratio which allows for all the extra information while not scaling the main show, which you can see in the above photo on the left. This particular feature has made appearances a number of times but is usually blocked by the inability to build this into an ecosystem. YouTube TV is doing this for sports by scaling the main event.
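The arithmetic behind that 21:9 layout is simple and worth seeing once; this little sketch just compares aspect ratios, with no vendor specifics assumed:

    # How a 21:9 panel shows a full 16:9 program plus side content without scaling it down.
    panel_w, panel_h = 21, 9      # ultrawide panel aspect ratio
    content_w, content_h = 16, 9  # broadcast/sports content aspect ratio

    content_share = content_w / panel_w  # fraction of panel width the 16:9 picture needs
    side_share = 1 - content_share       # what's left over for stats, scores, tickers

    print(f"16:9 content uses {content_share:.0%} of the width; "
          f"{side_share:.0%} is left for side panels")  # ~76% vs ~24%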

That might have been enough, but there continues to be a push on 3D as well, now with no glasses or goggles. Here’s a Samsung commercial 3D display for signage, particularly retail. Obviously, the photo doesn’t capture the effect, but it is rather remarkable. This can be done live as well; they were showing it with people lined up to add themselves to a 3D display ad.


Bottom line is there’s a lot of innovation going on in display and TV and it is aimed in the direction I think most consumers care about, which is better picture and sound. Given how much of our video content is now watched at home, yet the era of “home theaters” (with $20,000 “RUNCO” projectors and a $50,000 rack and a remote that doesn’t work) is long gone, replaced by a giant flat screen in an open-plan living/dining room with people sitting in random places with their phone in their face, I am hoping there’s a potential to do great setups with these broad consumer TVs so we don’t go back to building out rooms that cost more than the house itself. If we’re going to spend $30 on a first run movie in 4K HDR Dolby Atmos, we should have a fantastic audio/video experience approximating what the director had hoped for.

PC and Tablets

I can’t help but seek out the personal computers even though they are a tiny fraction of the show. Except for Computex in Taipei there’s no real forum where PCs come together anymore, which is a weird feeling. CES sort of picked up PCs when the COMDEX show (think Halt and Catch Fire) failed to make it through 9/11 and the dot com crash.

In keeping with the show delivering on innovation, like TV, the PC world is delivering solid improvements in the hardware. Like TV these improvements will not redefine the category, create new use cases, or drive early replacement, but they will be better.

This is about Windows PCs. Apple is ubiquitous at the show but has no public presence. US people walking the show get a good reminder though of Android market share outside the US, especially in Asia, when seeing all the folding phones in use by attendees. Make no mistake, easily the majority of PCs in use in booths are Macintosh and don’t think I don’t notice this. It hurts.

The Windows PC faces two structural issues:

  • The spreading of effort over ARM v x86 today is unnecessary. The gains from ARM will not materialize and the edge compatibility cases keep it out of the enterprise. ARM was about a new software ecosystem and new hardware capabilities with a break from x86. Merging them was always going to cause these challenges and gain none of the benefits.

  • Windows doesn’t carry the APIs for new scenarios in AI and there’s an Nvidia ecosystem gaining momentum as a true replacement in a browser-based world. That isn’t an enterprise thing, but it is a developer thing and is easily, as Chromebooks demonstrate, a consumer thing.

A symbol of this second point is that the chip companies and some OEMs are offering AI accelerators for offloading compute to dedicated chips when running APIs not generally part of Windows. Note: TOPS is a meaningless number for comparing across architectures and operating systems, just like megahertz is; a rough sketch of why is below.

Sorry for the photo orientation. This was a tricky booth.
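As promised above, here is a rough sketch of why a single TOPS number tells you so little across chips: the same silicon can be quoted very differently depending on the numeric precision and sparsity the marketing team assumes. The baseline figure below is hypothetical, purely for illustration:

    # Why TOPS headlines are hard to compare: precision and sparsity assumptions
    # can swing the quoted number by 4x on the same hypothetical accelerator.
    int8_dense_tops = 50  # assume a baseline INT8, dense-math figure

    quotes = {
        "INT8 dense":  int8_dense_tops,
        "INT8 sparse": int8_dense_tops * 2,  # structured sparsity often doubles the headline
        "INT4 dense":  int8_dense_tops * 2,  # halving precision often doubles it again
        "INT4 sparse": int8_dense_tops * 4,
    }
    for label, tops in quotes.items():
        print(f"{label:>11}: {tops} TOPS")   # same chip, a 4x spread in the claim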

Additionally, PC enthusiasts would also chime in and talk about the evolution of the Windows user interface itself but that is a bit like the TV software platforms (Tizen, WebOS, etc.) and not core to the issue. The taskbar, explorer, start menu, and so on are not going to alter the trajectory. Worrying about those is truly rearranging deck chairs.

That said, what is being delivered by Dell, Asus, Lenovo, HP, and others working with Intel and AMD are very nice PCs. While I certainly have offered the cliche “these are the best PCs ever,” this year is particularly good. I believe they represent the right kind of innovation in the market given where PCs are. If you’re doing “PC things” then these are the best ever and you probably have no need to try to make them do things PCs are not so good at these days.

Dell relaunched the XPS with an absolutely rocking model. Real function keys. Thin. Light. Solid. Great display, where Dell has been executing fabulously. It is premium and premium-priced, especially if heavily specified. The Dell XPS 14 has long been my PC recommendation and I’m glad to have that back. Dell also continued to do stellar work on monitors, announcing two mind-blowing products: the UltraSharp 52 Thunderbolt Hub Monitor, a 52-inch 6K display, and the UltraSharp 32 4K QD-OLED Monitor, focused on performance and color accuracy.

Dell XPS 14 in a photo from Dell since they did such a great job, they deserve a better photo than the one I captured.

Geekom produces a full line of PCs favorable on both performance and price. Their latest X14 Pro is only $999 (half of the above Dell) with an Intel 125H processor/ARC graphics, weighing only 999g.

Asus introduced a line of business-customer focused PCs, which means they are a bit more durable and overall premium and come loaded with Windows Pro. These were quite nice.

Samsung and LG both offer premium PCs as well. Here’s the new Samsung Galaxy 14.

At home I have always been a fan of All-In-Ones when you want a big screen for a stationary PC. The market is limited as laptops are now equivalent in capability and in fact the AIOs are just laptops in the base or behind the screen. Still, this ASUS model was super nice. Lenovo introduced an AIO with a large dual-screen format with an almost square resolution for creators. The screen is effectively two letter-sized landscape panels stacked. You either think this is an ideal size or it needs to be tilted 90° sideways. I think it is ideal for productivity like XL or PPT (aka sheets or slides).

All of the above are mainstream PCs that do a great job at PC things. I think that is where the market is and pushing PCs to be more creative in form factors really requires new software and the investment in new capabilities at the app layer just isn’t there compared to devices on the edge with companion mobile apps. That’s causing people to bemoan the lack of innovation in PC form factors.

That said, Lenovo showed off (again) a flexible screen laptop. People went nuts over this prototype. Asus had a more practical product in this area, the (updated) Asus Zenbook Duo. This is a two-part tablet/laptop with a big folding screen that can be dual or full screen. The way they make this practical is that the keyboard magnetically attaches/detaches and the screen has a stand. They don’t try to make it a tablet, just a dual-screen laptop. If you’ve seen those USB-C powered folding screens in Instagram ads it is basically that sort of screen with a high-quality laptop built in. I like it because it focuses on what laptops are good at without needing software updates that won’t happen.

The presence of tablets at the show is substantially less than previously. In the Apple world, tablet evolution is clearly one where the improvements in PC silicon and software as a result of silicon (power management, security, etc.) have made their laptop value proposition a superset of the iPad even as iPad run rates continue to be enormous. In the non-Apple world, tablets seem to be used as utility screens for point of sale, signage, and in other places where touch and durability are key.

The biggest presence for a tablet on the floor was the Samsung Galaxy Z Trifold. This is a case of the industry continuing to pursue folding screens. I swear the industry is going to will this form factor into our hands one way or another. Around the show you casually see way more folding devices than you would in any crowded space in the US in general. The presence of much of the Korean GDP in attendance might have something to do with this!

The Z Trifold is no doubt an incredible device. Geeks love folding convertible devices and always have (Surface!), and the trifold has it all. It’s a phone. It’s a bigger phone. And it is a tablet. Just keep unfolding it. Samsung has done a bunch of work to enable the Android platform to somewhat gracefully handle the screen dynamics, including a mode that basically runs three apps side by side. My general issues with double or triple folding are overall futz factor and the size. As with a PC versus an iPad, having windows can be helpful, but more often than not you have to devote cognitive energy to arranging and managing windows that just isn’t that beneficial. For me the bigger problem is I can’t figure out how to effectively or efficiently type. I generally fold the device to type. I love the idea of having a big screen for video but maybe I just don’t watch enough long-form video on a phone. For productivity I struggle with the bigger screen. Needless to say, right now in the tech scene I’m in the minority and Reddit went crazy over this device.

The mini-PC form factor was everywhere and not just for AI PCs. In many ways these are the modern PC desktop outside of gamers (which I’m not going to cover in this trip report though there was a big and significant presence). The benefit of the mini form factor is for ports and ergonomics in a home or office environment, especially with multiple monitors. The low power consumption and mostly laptop-level fan noise are great. Enterprises love these for ease of transport and because the useful life of a display is even longer than a PC’s. The price is great too. Below is an updated Geekom one. I’d get one like this.

Home, Automation, Security

Devices for the home represent the heart of CES after television and audio. If you’re not a professional installer with dedicated shows (appliances, security, custom homes), CES offers the best chance to see the evolution of this broad range of devices but also the complexity being pushed to the home.

The continuing challenge with the home remains integration of device classes (power, switching, plumbing, lighting, locks/doors, cameras, major appliances, and safety/disaster). There’s a fork in the road with all devices. Do you go with major existing brands and wait for their own “smart” platform, which almost never integrates (for example, garage doors)? Do you choose to make a device smart by going with a brand-new vendor in an area that often lacks a long-term track record for maintenance, parts, and repairs and is often disconnected from all legacy approaches (lighting is the best example)? Or do you choose a class of smart add-on capabilities that integrate with legacy infrastructure (like a Nest thermostat)?

I’m basically a fan of the latter. I literally can’t stand the wholesale “new” stuff for a home because the last thing I want is 10 years from now needing replacement parts that don’t exist from a company that doesn’t exist with preconditions I can’t replace. I currently have that problem with Lutron (a MAJOR manufacturer) light switches, for which I scour eBay for needed replacement parts. In the past I have covered the non-integrated smart platforms but that’s another area where I’ve been burned as a consumer (the aforementioned garage door, Chamberlain, which cut off HomeKit support). I have no patience (literally) for this type of home futzing and no tolerance for a poorly integrated home experience, which I define as “an app for every system or device”.

In this context, CES made substantial progress this year and that is where I’ll focus. The Matter standard—championed by Apple HomeKit—has seen fits and starts but is really coming together. Even if you don’t use HomeKit, the presence of Matter has elevated platform support and integration across devices. Plus, it has made setup much easier. The main integration points are Google Home, Apple HomeKit, Samsung SmartThings, and Amazon. Across those there are many common devices from third parties but first party devices when they exist often don’t hop ecosystems.

In the following you can see how both SmartThings and Amazon have continued to expand partnerships and first-party devices. The partners across the platforms are mostly the same, which is great for everyone.

Samsung SmartThings home security.
Amazon home security first party devices.

Looking above, Amazon announced several first-party sensors at CES, including an OBD2 car alarm, an air quality monitor, and a sump pump monitor. Amazon has done a great job integrating with legacy sensors, especially safety devices, owing to their work with professional installers. For example, they have a Ring Alarm add-on that can replace a wired alarm and use those door sensors. They have a smoke alarm sensor that listens for the alarm and reports it in your Ring App/Alexa as a fire alarm.

There were many new models across door locks, thermostats, switches/plugs, and more. Many companies offer complete product lines of these that integrate across Matter and older standards. Buying these on Amazon itself is easy as well. The only caveat is you might get cut off from firmware updates if the company vanishes and some might see the security risks of these devices as noteworthy. I do.

Security cameras and systems are like plumbing, HVAC, and electrical in the sense that few of them integrate effectively into a single home app from the platforms. Even Amazon, which offers Ring cameras and home control (and Wi-Fi), still shows many integration seams from acquisitions. HomeKit has limited camera support so far. That said, home cameras are becoming a requirement, and the various world events that took place during CES show how security cameras are a vital part of home ownership.

I have been a Ring fan since the earliest days and find the experience great. That the doorbell is part of an incredible array of cameras and an alarm system is fantastic. Amazon does sometimes get a bit complex as to whether they are a direct-to-consumer or professional channel-supported brand, and the products often teeter between the two. The new 4K home cameras are great but have too much of a low-end consumer industrial design for a device that should be a bit more robust, professional, and stealthier. At the extreme they added some big outdoor cameras that will look great on a loading dock even though you’d probably love one for the whole back yard.

Going even further, Amazon introduced a job site security system on a trailer! This seems to go with the job-site Eero introduced at a previous CES. I kind of wonder if the primary customer for these might be Amazon itself? Below is a trailer with a solar panel, battery, and giant outdoor 360° camera.

While there are some standards that connect third-party cameras, you still have to buy into a hub and with that an app and potentially third-party monitoring. All of the cameras at the show are now doing some sort of AI image recognition, owing to the fact that most come from China where this is a primary scenario. This year all the new cameras are 4K. What I found most interesting is the addition of integrated solar panels (versus the add-on panels of past years) and of secondary cameras for more field of view or even infrared.

Finally, some companies are adding 4G/5G connectivity, making the cameras completely wireless.

Cameras are becoming AI “edge” devices. In fact, many commercial systems are either incorporating edge-style AI computers or equipping legacy cameras with an edge-style endpoint providing computation and connectivity. This system below from Sixfab is using the equipment shown to process the many cameras on the screen to do scene and person identification.
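
To make “edge” concrete, here is a minimal sketch of the kind of per-camera person detection these boxes run locally. This is my own illustration, not the Sixfab system: it uses OpenCV’s stock HOG person detector, and the stream URL and the alerting print are hypothetical placeholders.

```python
# Minimal sketch of edge-style person detection on a camera stream.
# Illustration only -- not the vendor's implementation.
import cv2

def watch_stream(url: str) -> None:
    # OpenCV's built-in HOG + linear SVM pedestrian detector.
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    cap = cv2.VideoCapture(url)  # RTSP/HTTP stream or a local camera index
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Downscale to keep inference cheap on an edge box.
        frame = cv2.resize(frame, (640, 360))
        boxes, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
        if len(boxes) > 0:
            # A real system would push an event to a console or app here.
            print(f"person(s) detected: {len(boxes)}")
    cap.release()

if __name__ == "__main__":
    watch_stream("rtsp://camera.local/stream")  # hypothetical camera URL
```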

What is most interesting with respect to cameras, however, is how security is evolving to NOT use cameras. A number of approaches were on the floor using mmWave RADAR or even Wi-Fi signals to monitor a home or public space so as to preserve privacy while providing real-time monitoring.

The kind of scenario where this matters is in the home, where you don’t want to constantly video monitor the living room or a bedroom but want to know about presence, a break-in, someone falling, kids or pets fighting, and so on. These systems put a small device on a table or shelf that connects to an app. They use AI to transform the sensor signal into alerts. They can monitor sleep (or sleep walking/waking), a nursery, changes in lighting, entry/exit from a room, falls, or in a public space even crime or unwarranted contact.
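
As a toy illustration of turning a sensor signal into alerts, imagine a radar that reports an estimated target height every half second and we flag a fall when the height drops sharply and stays low. The vendors use trained models; this simple heuristic and the sample data are my own hypothetical stand-ins.

```python
# Toy illustration: derive a fall alert from (timestamp, estimated_height_m)
# readings from a hypothetical mmWave presence sensor.
# Real products use trained models; this heuristic is for illustration only.
from collections import deque
from typing import Optional

class FallDetector:
    def __init__(self, drop_m: float = 0.8, window_s: float = 2.0):
        self.drop_m = drop_m      # height drop that suggests a fall
        self.window_s = window_s  # how quickly the drop must happen
        self.history = deque()    # recent (t, height) samples

    def update(self, t: float, height_m: float) -> Optional[str]:
        self.history.append((t, height_m))
        # Keep only samples inside the time window.
        while self.history and t - self.history[0][0] > self.window_s:
            self.history.popleft()
        peak = max(h for _, h in self.history)
        if peak - height_m >= self.drop_m and height_m < 0.5:
            return "possible fall"
        return None

detector = FallDetector()
samples = [(0.0, 1.7), (0.5, 1.7), (1.0, 1.6), (1.5, 0.3), (2.0, 0.3)]
for t, h in samples:
    alert = detector.update(t, h)
    if alert:
        print(f"t={t}s: {alert}")  # alerts at t=1.5s and t=2.0s
```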

Restroomguard is one such product, providing security for public restrooms, clinics, dressing rooms, and more. The range of sensors is impressive.

For the independent-living aged population, a number of solutions were shown. One of them—Silvershield—even uses the phone’s camera to map out the room and provide even more accurate representations of furniture to identify potential falls, bumps, or obstacles.

Acoustic Eye uses cameras and microphones to provide aerial coverage of a space and alert on the presence of drones. The device attaches to a window. Cool!

Old-fashioned home appliances remain targets of the onslaught of AI. There’s been a desire to connect legacy-style appliances to the internet and now to add AI. I found the internet connectivity for the kitchen and laundry room to be not just a bust but an actual negative. My feelings have not stopped the ongoing attempt with AI.

These just aren’t better. Not having switches is a bug not a feature.

What bothers me most about the connected/AI appliances is that the claims are about cleaner, faster, easier, but that doesn’t seem to be the case in practice (for the few I have). But they are more expensive and they seem generally less well made and certainly will be obsolete sooner.

Using AI is great of course, but it can also be dubious. This washing machine uses AI to better understand the floor and adapt the washer/dryer to the floor surface. But the floor surface doesn’t change underneath. I’m confused.

Does AI really need to be used to identify the clothes?

One category that seems to be getting better even without AI and connectivity is vacuum cleaners. The combination of improvements in understanding the physics of suction (!) and battery/motor technology is truly improving things. I really like the new vacuums that have floor-based charge stations. The upright stick models that used a wall-mount charger bugged me because I don’t like mounting things on the wall if it can be avoided. Even though the demos were just sucking up inordinate amounts of Cheerios, they were compelling.

Finally, home appliances offered a great chance to see how a company can just “appear” in the US market despite having been on a long journey. Dreame is a Chinese maker of vacuums that started a decade ago. This year they had an enormous booth and a product line covering everything: vacuums, TVs, smart home, small appliances, air conditioning and air quality, personal care, laundry, and refrigerators. They had a catalog the size of a paperback novel. Totally wild. Their booth represented a product line like Panasonic or Samsung and was at that scale.

Health

The biggest thing on display at CES has always been the divide between consumer health and wellness and FDA-approved medical care. Every booth is a discussion either of how a device is useful and valid even though it isn’t (and doesn’t need to be) approved by the FDA, OR of how it is in the process of seeking FDA approval. This year was no different. The show tends towards the non-FDA world, which is why when I see something approved it is worth a good look.

The wellness trends have been huge, but are really in full force now. There were countless bands, rings, and watches that measure body telemetry non-invasively and then process that telemetry to provide some form of wellness score. It is impossible to provide any insights on these products without trying them. This is one where making a platform bet is a key decision. For many, the form factor will drive the initial choice: a two-way phone-integrated watch, a discrete ring, or a sensor band are the choices today. There wasn’t much new in this area other than more of all of those.

Many in the outdoor sports world are fans of Garmin. They introduced a new Forerunner GPS watch with a lot of new features. One of the cool ones is a red/white flashing light that serves as a safety measure when running. It has a long-duration battery owing to its huge size.

Also for athletes (and perhaps more) is a new sensor that measures lactate while exercising. The noninvasive sticker sensor goes on an arm or wrist and measures the contents of sweat. The first one measures lactate for endurance and mitochondrial health and is primarily aimed at lifestyle improvements. The vision is to add biomarkers that measure cortisol, creatinine, and glucose for comprehensive health insights.

Sleep is by far the category with the broadest coverage and is also converging with AI. Every one of the above devices also acts as a sleep monitor. There were also countless mattresses and toppers that provide sleep measurement. I know for me, waking up to two different sleep scores plus how you really feel is probably more stress than anyone needs first thing.

The most basic body telemetry is weight. Withings has had some of the earliest connected scales (along with other connected health devices). The company introduced a new Body Scan 2 scale. This is a scale with an extendable impedance grip device. The scale measures a whole host of biomarkers and does a longevity prediction that is more detailed and precise than the previous generation. It said “muscular thin”, so I was sold. The device connects over Wi-Fi and integrates data with Apple Health. They are definitely pushing a monthly/yearly plan, and much of the analysis slides into paid-only, so be on the lookout before buying.

Quite a few devices, especially those from other countries, are designed to be deployed by and used with a primary care doctor. In many countries these startups are funded/supported by the national health services. One cool example I saw was a Korean company, exoFit, that designed a reusable stick-on patch that measures body composition, EMG muscle activation, gait, motion analysis, and more. It takes the collected data and provides “AI-powered” analysis for training and rehabilitation using the accompanying dashboard. The software is detailed and technical, designed to work with a specialist. Thus, it includes the full occupational therapy/rehab plan and compliance.

Many devices on display are just designed to be better at what we have always measured, but better because of form factor and connectivity. Another Korean device I liked was the Thermosafer XST600 (!), which is a core body temperature sticker that provides a connected and continuous measurement. There’s a little sticker you place on the body (forehead) and pair with a gateway and monitor, which can then display the temperature in an app or on a multi-patient console (or feed the data to an EMR system). The motivator for the device was COVID and viral infections. I love these simple improvements.

Photo from the brochure provided in the booth.

I loved this finger blood pressure monitor. Interestingly, they created two models, one with Wi-Fi and one with cellular. It is deployed via hospitals/doctors for monitoring where connectivity is important, and for many patients Wi-Fi isn’t an option at home.

Like all such monitors, the reading counts up and down while measuring, so this is not the final measurement displayed. Not my finger.

One badly needed bit of body telemetry (at least until the world is on GLP-1s) is continuous glucose monitoring. The current minimally invasive patches are widely used (as I see in yoga class) but are neither particularly convenient nor cheap. For a very long time there’s been a search for a non-invasive measure. Many believe that AI will unlock some telemetry that will prove correlated with blood glucose. Every year there are a handful of non-US companies that claim to have a solution that never breaks through the FDA. There were several this year (and one I see on Instagram ads all the time). I saw a ring and a fingertip system this year. I will wager these won’t make it.

PreEvnt is a US company with a new implementation of an old bit of telemetry. It has long been known that your exhaled breath changes with blood glucose level. The correlation to time and level has been difficult to model, as have the specific compounds involved. Isaac is a small “breathalyzer” form factor that measures volatile organics/acetone and couples the readings with an AI model to deduce glucose levels.

After low back pain, foot problems are among the most frequently reported conditions and often go undiagnosed and untreated. The sensors have been around for a while, but getting long-duration measurements and correlating them with conditions has remained unsolved. There have been shoes and insoles in the past. This year the Orphe insole combines those with a good deal of software in an effort to diagnose and treat a broad range of motion issues as well as falls. The insole has six pressure sensors that provide motion sensor-based gait analysis, plus software that can help directly or couple with third-party software for more analysis.

Not all of the products were devices. One product I was fascinated by was XRaedo from a Korean company that has a number of health-related XR-based (extended reality) products. XRaedo creates a full-motion A/V avatar of a loved one who has passed. The platform enables interaction and supervised grief counseling. At first this seems kind of creepy, but as we see generative A/V arise from small samples, combined with LLMs and training corpora based on real-life first-party materials, it does seem that recreating loved ones (or historical figures) is going to be a feature we will all have easy and low-cost access to.

One of my show favorites was Triage 360° from Omedus, a US company based in Nebraska and founded by a triage nurse who worked through the pandemic, a physician, and a former Air Force officer. The product provides a high-tech addition for first responders at mass casualty events. Today triage is a color-coded arm band and a sharpie with a lot of verbal communication and physical separation (watch the TV show “The Pitt” for a good demonstration of this). Triage 360° is a crash case with a bunch of reusable modules that attach to the chest. These modules have the same color codes but also provide basic and needed telemetry, wirelessly sending it back to a console that can be used to monitor and adjust triage. It is not hard to see this product stored in any location where a mass casualty event might unfortunately arise.

Robots

I came to the show certain I was going to be overwhelmed by robots. In a sense that was true. There were a lot of robots. The only problem was most of them weren’t working. In my view there were three types of robots to be seen:

Articulating arms and hands. While I won’t call this a solved problem, they have been used in manufacturing for decades. What is new is setting them up, training them, and how they function in the face of ambiguity. AI has dramatically improved these and it is easy to see we’re going to have a lot more sorting, picking, welding, attaching, and so on to drive “lines” of all sorts. Below is a set of arms that pick off a shelf at a robot-staffed modern Automat.

Roomba-style autonomous machines. I was (again) blown away by the number of lawn and pool robots. I spent my youth mowing the lawn and cleaning the pool so I can appreciate these, but I had no idea cheap teenage labor was in such short supply :-) Lawn robots are becoming larger and more tank-like for sure. These scenario-specific robots cover everything from the aforementioned lawn and pool to warehouse delivery, construction site material movement, commercial office cleaning, and warehouse stock movement, to name just a few. Again, these are vastly improved and independent with the new layer of AI software and sensors driving them. There are going to be many many many more of these. The unifying theme is the docking station, so I suspect new offices and factory floors will be developed with dedicated docking space, something you see on luxury custom home TikTok for Roombas in kitchens today. Below are a few examples of this style.

Humanoid (and four-legged) autonomous machines. For many, the progress as evidenced by CES seemed slow. The demos were a chorus of failures for sure and reminded me of so many speech recognition demonstrations of the past. There’s a feeling these are always a few years out. There’s no denying the progress, and given the pace of autonomous driving, it is fair to expect the set of problems these devices face will dramatically improve. Will they get broadly deployed in a few years? To say no requires one to bet against Elon Musk, which I am definitely not willing to do.

Here are a couple of humanoids. The first picks items off a shelf. The second is one of the defensive Boston Dynamics “dogs” but it wasn’t quite lively at the show. The third is just some robots standing there, suspended on humanoid robot racks.

By far the EVW-1 was my favorite robot demo. It is an assist device so robots of the Roomba class can make their way up and down elevators. The robot somehow summons an elevator (a step they said could happen with a Wi-Fi command) and then via another command selects the floor. This robot then slides up and down and pushes the button. I wasn’t sure why the same Wi-Fi API couldn’t press the button, but I digress. Presumably a full humanoid robot does not need this help. So, this is only for when R2D2 is alone without C3PO.

This one was pretty cool. And it really does show the ample opportunity to turn jobs that are difficult at best into jobs that manage, maintain, repair, and keep the robots supplied.

I refuse to be cynical about robots, both in the “always 5 years out” view and in the “they are a net negative for jobs” view. I think the demand for robots is infinite, and the more we use them, the more jobs get created, just as we saw with transportation as autos created whole new classes of jobs for humans beyond those tied to horses (and many more jobs).

Mobility

As mentioned way up at the top, a big chunk of exhibits from past years that were not at the show this year pertained to mobility, particularly autos. I welcome that reduction even though mobility is deeply connected to all the technology work I love and follow.

The show featured a small amount of micromobility—scooters and ebikes and other assorted ways of moving around in an urban or warehouse setting. There was also a collection of full exoskeleton suits, which seem to be increasingly on display for both individual and commercial use. While more health-related, there were a number of joint-specific exoskeletons, such as for the knee or arm.

This is a Segway “for off road use only” motorcycle that we will almost certainly see on city and suburban roads. Grrr.

For me there’s one strategy that really matters for mobility and that is autonomy. Like PCs, I believe human-driven vehicles (particularly but not only gas combustion) have reached their saturation point. While obviously ownership will increase, the technology has plateaued and the growth scenarios for vehicles are in autonomy. I believe like mobile phones, it is likely that parts of the world that have low vehicle ownership today will leapfrog human-driven cars once the road infrastructure and economics are in place.

The national companies building GCE vehicles all seem to have either retreated or are struggling with the EV transition. The case seems clear that the model of in-house integration and manufacturing of broadly outsourced design and engineering has made it nearly impossible for existing automakers to transition to EVs. China is clearly leapfrogging the gas engine with new companies like BYD. Still, the existing GWA, known for heavy equipment, made a big showing of its line of consumer-oriented EVs. They are a distant number two but making a run for it.

The San Jose-based company Tensor showed off its “designed from the start for autonomy” passenger car. Tensor, like much of the auto industry, makes use of Nvidia compute on board. The several not-Tesla, EV-first makers have also been struggling in the US. This is very tough.

While I laud the efforts at innovation, I don’t think it is necessary that we keep reinventing driver interiors and dashboards for manual driving. It is costly, has uncertain benefit, and right now the safety record is dubious. There were far fewer customizable dashboard components at the show.

Along with Zoox providing transport around town there was Waymo. Waymo had a giant booth. A lot of people in the booth don’t live where Waymo is offering service.

There were assertions of all sorts of autonomous vehicles for commercial use, from Bobcat earth movers to heavy trucks. My favorites are the small private property/warehouse vehicles for shuttling people and supplies. I fully expect to see these shuttling around suburbia soon enough.

I loved this “all-in-one retrofit autonomy kit” specifically designed for military vehicles to enable field deployment. You add this to the tank and mount the various sensors depending on the level of autonomy desired (scout, ranger, commander).

Photo from brochure.

Lifestyle and Accessories

My favorite section is the most personal. This is about things we all use that make life a little more fun or better. No long essays needed here, just some cool stuff with comments in the captions.

I want to start with a show favorite. This is a super tracker. It works with both Android and iOS and seamlessly taps into those phone-based tracking tools. It also features a QR code which can be scanned (so no Find My needed) to connect directly with the owner. To solve the “how to attach” problem it has a built-in connector which easily works with a bicycle tube, handle, shoelace, or strap. The last feature is that it uses the cell network to provide positioning by receiving and triangulating. I was not sure how it transmits this or makes use of it. I will let you know, as I ordered a pair from Amazon.

This AirTag is built into the bell on a bicycle or scooter! Reflying is an ODM and always has a lot of creative location solutions.

Every year there is some new charger or cable that you just see everywhere. It used to be that to learn about this you had to go to Japan where they appeared first at retail. Then they started showing up at CES looking to find distributors. Now most of these are on Amazon before the show. That was the case this year with both of the “saw these everywhere” chargers. First there was the retractable cable charger that packages a 40-65 watt charger with a short cable and often an additional plug or two. The deluxe models feature a retractable MagSafe stand. The problem is they don’t have the output to power all the ports, based on my experience. The other device is a flat box with a mains cord and several mains receptacles along with some USB-A/C ports.

There is clearly a race to build the charger with the most output. The highest rating I found was a ridiculous 370 watts. Ironically this was branded by Monster who used to have a booth selling magical analog cables with silver and nitrogen in the wire cases.

Aukey (not to be confused with Anker) offers a nice view of how these accessory companies all offer a FULL range of the same accessories.

These cables/chargers include a SIM slot so they can act as a cellular hotspot. Pretty wild. Feels like a spy device.

From the same company, this is a physical SIM card that itself supports a dozen eSIMs. You put this physical SIM in your phone (Android or non-US iPhone) and then you can load it up with eSIMs. I think the app for loading only works on Android.

Moving on to connectivity. There’s been a rise in a micro category of mobile hotspots for use off of public Wi-Fi or wired networks. This provides a level of security and often VPN access as well. I travel with one, though I couldn’t get it to work at the hotel this year! Like many things, they go from cool idea to “wowza, that’s a lot,” probably too much. This one is HUGE and is more like a stationary router at this point. The big addition is a screen for doing all the management on the device.

ASUS showed the first live demonstration of Wi-Fi 8! (I am still not on 7.) The booth showed it to be faster. The stated goal is to be better for a lot of low-bandwidth devices. In the photo, the thing with all the antennas is the device and the hexagon-ish model is the proposed final form factor. Also, Asus, it is Wi-Fi not WiFi.

At the other end of connectivity, there were several demonstrations of LoRa and Wi-Fi HaLow™, which share a portion of the sub-GHz (900 MHz) spectrum in the US. HaLow is Wi-Fi 802.11ah for low-bandwidth IoT devices that benefit from very long-range connectivity. This is what could be used in a giant open field or a very large warehouse/factory floor. The devices look like Wi-Fi chips but they benefit from a bigger base station antenna. Below I included a bandwidth explainer for fun :-)
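
A back-of-envelope way to see why sub-GHz radios reach farther: free-space path loss grows with frequency, so at the same distance a 900 MHz HaLow or LoRa link starts out roughly 8-16 dB ahead of Wi-Fi at 2.4 or 5.8 GHz before antennas, walls, and protocol overhead even enter the picture. A quick calculation using the standard free-space path loss formula (my own illustration, not from any booth):

```python
# Back-of-envelope free-space path loss comparison (illustration only).
# FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

for f in (900, 2400, 5800):  # HaLow/LoRa band vs. common Wi-Fi bands
    print(f"{f:>5} MHz @ 1 km: {fspl_db(1.0, f):.1f} dB")

# Roughly 91.5 dB at 900 MHz vs. 100.0 dB at 2.4 GHz and 107.7 dB at 5.8 GHz.
```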

The product Rockland LowMesh is a LoRa radio that has a transceiver that pairs with a phone (that does not need cell/Wi-Fi for use, just BTLE) and allows a private mesh network over long-range (longer than family radio). It also has a base station to even further extend the range. They sell a whole suite of connectivity tools for the platform.

Brochure photo.

MileFlask by ShowMo, a security company based in the US, has a HaLow-based camera solution to provide security/surveillance for a multi-mile area.

Brochure photo.

Handy bandwidth/frequency explainer:

I don’t like using my phone on a plane because there’s nowhere to put it that isn’t too far from me. I loved AVP but that was a schlep. These headphones, which can be used as plain audio headphones, feature a cool integrated “screen” that you can stream to from your phone. Pretty wild and very nice ID. Not available yet but I can’t wait to try a pair on a trip.

These earbuds were wild. They went beyond noise cancelling to what is basically voice isolation. They would play a bunch of loud music, like being at the conference, but you were able to carry on a normal conversation.

HDMI introduced HDMI Ultra96, which is 96 Gbps. The key is HDMI is now like USB-C in that you can’t tell if a cable will work because the connectors are the same but the wires inside are different. So kudos to the EU for this victory.

We’re addicted to our Gardyn (so are the kittens). I love these home planters using AI, lights, and water. This one lets you use your own soil and doesn’t require pods. But it still has all the AI monitoring you’d expect.

This planter uses prepackaged themes/schemes and uses all the fancy tools to help you keep them alive. It looks very nice in a room too.

Speaking of cats, this is a body composition scale for pets. I can tell you firsthand our cat would not be happy with this as he is just living his best life.

And Finally…

Just a few things that made me smile or something…

ChocoPrint is a 3D printer vending machine that makes customized chocolate treats. I need to order this for our apartment. HT to Aaron Bregel who messaged me this.
This was some sort of AI assistant. I had no idea what to say to the person in the booth, so I just smiled and said, “Looks like you’re trying to get Clippy out of retirement.” Note the mismatch of Clippy and the taskbar. Yeah, I did.

This was kind of…creepy.

So was this.

The claim has not been verified.
This section of the show is always my favorite. Whenever you start to think it can’t be that difficult to manufacture consumer electronics, just remember there’s a whole region of China where dozens of companies make plugs and cables as a competitive advantage, depending on other countries to invent new things that need new plugs and cables.

If you made it this far then you’re probably as exhausted as I was on Sunday afternoon hitting publish. What I’d ask you to do is just take a deep breath and think about the millions of people directly employed by everything in those 4100 booths. The thousands of people that worked 7x24 the week before building the city and then tearing it down until mid-week this week. All the work they put in is mind blowing. And think of the 50,000 or more people who spent hours in booth duty or in meeting rooms buying, selling, explaining, partnering, and more. There is nothing that compares to the learning from booth duty. I want to thank all the people I bugged with questions or who without skipping a beat ran through the script for the 100th time. Thank you!

See you next year!

Notes. I visited the whole floor but still didn’t see everything so I know cool stuff is missing, especially if it wasn’t on display. I only attended a few of the private/press events. Nothing here is an endorsement. No products or links are sponsored in any way, including free samples. I took all the photos unless indicated. I am sure I made mistakes or missed something and take full responsibility. I estimate there are a dozen typos and garbled sentences in here and those are my fault and I appreciate the reports. You try typing this up after walking 40 miles ;-) No LLM was used in the production of this report. No market or financial advice is intended in any way. All views are my own, 100%.

]]>
<![CDATA[236. Somaliland in the News. A look back at this innovative country.]]>https://hardcoresoftware.learningbyshipping.com/p/236-somaliland-in-the-news-a-lookhttps://hardcoresoftware.learningbyshipping.com/p/236-somaliland-in-the-news-a-lookSat, 27 Dec 2025 02:00:21 GMTThis post was originally published for the Windows team (and extended audience) on my Microsoft SharePoint site and shared with key people at the Gates Foundation. It is reproduced as originally posted. The country has made enormous progress in 15 years as evidenced by this comparison from X member and candidate for Nassau County Legislature (where I spent the first decade of my life!), @HillWithView, on the day Israel formally recognized Somaliland just as the US, UAE, and others appear poised to do so. I left the links in place but many no longer work. Please keep in mind this was written in 2011.

11/30/2011 BUILDING A FINANCIAL PLATFORM FROM SCRATCH – LEARNING FROM SOMALILAND

In a previous post we talked about the opportunity to build a platform, from scratch, based on building cobblestone roads in Ethiopia. There are some similarities to the work we do in terms of building a self-reinforcing system that provides an opportunity for creative problem solving and overall economic expansion. This post builds on that by describing a different example of a platform, but one that becomes necessary as soon as you have roads to travel on—roads that make it possible to move more goods and services around—and that is a platform for money. In our context we call this banking, but what do you do if you don’t have banking? I was lucky enough to have a chance to learn about this system and meet with some of the leaders of the service team—this post is about the mobile money platform that brought banking to Somaliland.

Subscribe now

By way of background, Somaliland is an area located on the eastern horn of Africa, just north of Ethiopia. It is about the size of Arkansas. Currently, the UN sees it as part of Somalia, though it seeks independence from Somalia and operates essentially as an autonomous region. For those of us that have seen the movie Blackhawk Down or heard about pirates in the Gulf, Somaliland is not where all of that takes place—rather it is about 4 million people living primarily off the land through livestock (a population of over 24 million heads of cattle, cows, goats, and sheep!). The capital city of Hargeisa is accessible over land by crossing the border from Ethiopia (over that unpaved 28km road from the previous post). There is a budding tourism industry and with that a collective desire to maintain a safe environment (we were tourists #20 and #21 for 2011).


A few statistics on Somaliland [in 2011]:

  • At least 73% of the population lives in poverty

  • Per-capita GDP is US$226

  • Almost 80% of the population have no access to healthcare and there are just 61 doctors and 222 nurses in the whole country

  • 1 out of 4 children die before the age of 5

  • Average life expectancy is 47

You enter Somaliland by land crossing from Ethiopia, unless you choose to arrive on a United Nations flight. From the border crossing pictured below you take the 28km journey (this is a major trade route, by the way) to the capital city after checking in at the customs station (the photo is adjacent to customs since photos of the office itself would be uncool):

Border crossing between Somaliland and Ethiopia
Next to customs for Somaliland

The national economy is driven almost completely by livestock export to Arabia and Europe, but local economies are dependent on people selling their own goods at local markets, and there is plenty of room to develop. As you might expect, most of this trade is a cash business. While there is an official Somaliland currency (Somaliland Shilling, 1 USD = 1,600 SOS, but this varies quite a bit), much of the exchange of value takes place in dollars. In fact, those entering the country at international entry points are required to purchase SOS in USD in order to create an inflow of stable dollars. It takes a lot of SOS bills to buy stuff, even if prices are in line with developing world prices (the finest lunch of meat and bread costs about 5 USD, 8000 SOS). While photography in a primarily Muslim country is generally not permitted, occasionally some photos are allowed. Below is what an exchange station looks like in a local market. Each brick is about 10 USD.

Money marketplace - note bricks of cash

Carrying around this much money is somewhere between challenging and problematic (think of wheelbarrows of money). The situation would be ripe for banking and ATMs, which is what we do. But the problem is that there is no banking system—none at all available to people and businesses. The only bank in the country is the Bank of Somaliland and it has only a few offices (one in each city) with the primary function of managing government transactions and services such as printing currency and international exchange. In other words, there are no ATMs, no checks, no bank accounts, and certainly no loans. In fact, to pay any bills you have to present yourself at a city office such as the power or water company. To pay taxes you have to go wait on line at a government office, which wastes at least a day (or more) every month or so (so you can imagine the compliance rates). What to do?

While you could imagine just building a banking system like much of the world has, starting one of those from scratch is quite challenging when there is a lack of infrastructure. More importantly, much like building roads from scratch, one might ask what a bank would look like in the context of a country that did not evolve banking over a couple of centuries. Maybe you would build a different platform. There’s a unique opportunity to build a bank when you have entirely different constraints. While this sounds like a remote and singular geographic opportunity, it actually isn’t. There are a handful of poor and politically volatile economies in Africa where this type of platform could take root.

Imagine a bank that didn’t charge any fees. Imagine if you could only put money in and take it out as cash. Imagine if the only place you could do that were branches of the same bank you deposited money into. Imagine that you didn’t earn any interest on your deposits either. But in exchange, you didn’t have to carry around all your cash and could far more easily pay bills, easily get cash, and easily deposit cash. As I thought about such a solution, my first ATM card that replaced my passbook savings account seemed rather similar (my “Super SAM” card from Barnett Bank). But how could you start such a “bank” or build such a “money platform”?

When we talked about building roads, there were two key elements of a platform technology:

  • It is a technology that immediately solves problems faced by a significant number of constituents

  • It is a technology that is part of a self-reinforcing cycle or ecosystem that gets stronger the more people use it for what it is intended

There is a third element to a platform technology:

  • It is a technology that solves problems in the context of constituents in a straight-forward manner; and if it requires technologies to be built then those technologies are themselves readily available as a bootstrap.

This third point should also be familiar to us in Windows if you go back to the start of Windows. While it would have been possible and interesting to release Windows into a world that required everyone to buy a new type of hardware and no longer ran the thousands and thousands of existing MS-DOS programs, Windows was a platform technology that was built on an existing ecosystem. In using MS-DOS as a substrate, Windows was able to take advantage of the tens of millions of PCs that were out there and it also continued to run existing programs. At the same time it met the first criterion of a platform by solving problems people had—things like using laser printers (or as a developer, eliminating the need to write printer drivers yourself) or moving data between programs (the clipboard), to name a few. New technology platforms can, of course, be built, but that is never a straight-forward path and history tends to forget all the failed attempts.

In a place where there aren’t a lot of big buildings, roads, water, or power, there’s not a lot of traditional infrastructure upon which to build banks. There are, however, mobile phones. In fact, mobile phones are everywhere. Penetration of mobile phones in Somaliland easily rates them as ubiquitous. While worthy of a separate post, imagine how much time (and time is the only asset most people have) is saved by phones even amongst people working in open air markets. A simple example is whether or not to slaughter another animal to sell—without refrigeration you only have a few hours to capitalize on the meat so you’d better be right. A simple call to a contact in the market can help you understand demand and make the right call.

Since all transactions have been happening in cash, and everyone has a mobile phone, a “banking system” platform needs to be built to take that into account. Additional context is that most people (and businesses) do not have official forms of identity, the equivalent of social security cards, street addresses, or other means to establish their identity. They do, however, have a mobile phone number, and for most people that is far more important than any piece of paper and is as much an “identity” as a driver’s license or passport.

Africa (Kenya is where this started) is already home to a major, and innovative, cash-based mobile banking system called M-Pesa and much has been written about it. It is a sophisticated system developed by major international mobile operators, led by Vodafone. M-Pesa provides for peer-to-peer transfer of money and the withdrawal and deposit of cash, essentially funded by mobile minutes on account. By virtue of the origin of the service and the resources devoted to developing it, M-Pesa is sophisticated technically and in terms of features. As it has grown it also has far-reaching business deals (the ability to pay, deposit, and transfer money) with many national and international organizations. M-Pesa also represents a substantial portion of profit for Safaricom, the mobile operator that introduced the system.

In Somaliland, there are two major mobile operators, TeleSom and SomTel. There is no telecom regulatory body, but the companies work with the government and agree on things like rates. Rates are very cheap and comparable to US per-minute rates, which are much cheaper than in many other locations. Relative to banking, there is an overburdened and undercapacitated central banking authority (the director’s salary is $500 USD per year).

Seeing the work by M-Pesa and the need amongst its own subscribers, TeleSom decided to develop a mobile payment system. As you can imagine, the primary problem they set out to solve first was to make it easier for their existing customers (hundreds of thousands) to just buy more minutes/texts. You can imagine that a big thinking person (or a vendor or consultant from the developed world) might develop an “app” or at least use a WAP mobile web site, or just a PC web site. All of this would fail the test of working within the context of existing customers. This is a market that for the most part is using recycled handsets from around the world—there’s a large presence of used phones for sale that don’t even have color screens. And of course, there’s no credit so all usage is pay per use. PC usage is very high but not readily accessible. To add a more functional challenge, transaction fees can’t be part of the equation.

Enter Zaad. Zaad was introduced to TeleSom subscribers in 2009. The first goal was to increase usage and sell more minutes, and to do so more efficiently. The system is a home-grown platform and was built locally from the ground up. It currently has over 250,000 active users in a given month, and on average a person does $30 of transactions per week and 50 transactions per month in an economy where GDP per capita is less than $1.25 per day. The service will probably reach 100% penetration among TeleSom’s customers. Let’s look at the service.

Update: As of writing this on Dec 26, 2025, this is the Google Gemini update on the status of Zaad. Dang, that’s amazing!

At the technical level, the service is built on the GSM standard USSD. It is sort of the orphan of the GSM standard, like UDP is to TCP/IP. The neat thing about this is it allows for a simple menu-driven experience that looks sort of like a WAP site did in 1996 and works on the most basic feature phones—no browser required. Some implementations of M-Pesa use SMS, which actually has the downside of requiring code (and code updates) on the phone to interpret SMS messages. That’s not a readily scalable solution.

In order to use the service you need a TeleSom SIM for your phone. In order to sign up you need to go to one of the official offices of TeleSom. The first thing that is wild about this is that you are essentially opening up a bank account, but you don’t have an address, social security number, or anything to prove who you are. TeleSom developed their own system that involves a photograph and biometrics when you sign up. So any disputes later on can be resolved definitively. Because they are not the government, they are being “trusted” to not do the wrong thing with this information. This is critical because, as you can imagine, Somaliland has refugees and other internally displaced people who probably have very good reasons for not being “official”. Once you have an active account, you are free to use the service. You only need your SIM (and a phone) and a self-selected PIN (which you can always change).

In addition to a personal account, it is possible for a merchant to establish a merchant account. So if you are a larger business with multiple employees (rather than an owner-operated market stall) you can create a unique Merchant ID to use rather than an individual phone. In practice, most merchants are sole proprietorships and use a “personal” account (sort of like using a personal phone number at work).

The service provides a core set of functionality all over the phone:

  • Deposit cash

  • Withdraw cash

  • Send money to another person

  • Pay a bill to a merchant

  • Transfer money

  • Recharge a SIM (buy airtime)

The menu (generated via the USSD protocol) looks something like this in Somali (notice the Chinese handset—the China presence in Africa is itself a major topic):

Feature phone with Zaad menu

As you can imagine, deposit/withdrawal represents the most popular functionality. The service would not be so interesting if you had to always trek back to the main TeleSom office to do this (that would be like it was for my old passbook account). Instead, you can go to any authorized Zaad kiosk for the transaction. As it turns out there were already tons of these kiosks because that is where you used to charge your SIM (in fact they are everywhere you look—it is an easy side business for anyone with an existing point of presence). You walk up to a kiosk and just do a few steps:

  1. Dial *888# [Note this is how you access things via the USSD protocol]

  2. Enter your 4 digit PIN

  3. Enter 3 (Withdraw Cash)

  4. Enter Store ID

  5. Enter Amount you want to withdraw

  6. On success, you and the store will receive an SMS confirming the transaction.

  7. The Store will then give you the cash.

Depositing is even easier. Walk up, hand a guy money, receive an SMS. To recharge your phone, you give the same person some money and you get an SMS with a voucher number for the amount of airtime/texts you bought. Capitalism rules at the kiosks—the fees charged are up to the merchant.
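
For the technically curious, here is a hypothetical sketch of how a server-side USSD session for the withdraw flow above might be modeled. This is entirely my own illustration, not how TeleSom built Zaad: the handset only sends the dialed string and each reply, and the server walks a small state machine, with “CON”/“END” used here as a stand-in convention for “expect more input” versus “session over”.

```python
# Hypothetical sketch of a USSD session for the withdraw flow described
# above (dial *888#, PIN, option 3, store ID, amount).
# Illustration only -- not TeleSom's actual implementation.

class WithdrawSession:
    def __init__(self, msisdn: str):
        self.msisdn = msisdn   # the subscriber's phone number is the identity
        self.step = "pin"
        self.store_id = None

    def handle(self, text: str) -> str:
        """Take the user's latest USSD reply and return the next prompt."""
        if self.step == "pin":
            if text.strip() != "1234":   # placeholder; real PINs live server-side
                return "END Invalid PIN"
            self.step = "menu"
            return "CON 1 Deposit  2 Send  3 Withdraw  4 Pay  5 Airtime"
        if self.step == "menu":
            if text.strip() != "3":
                return "END Only withdraw is sketched here"
            self.step = "store"
            return "CON Enter store ID"
        if self.step == "store":
            self.store_id = text.strip()
            self.step = "amount"
            return "CON Enter amount (USD)"
        if self.step == "amount":
            amount = float(text)
            self.step = "done"
            # A real system would debit the wallet and SMS both parties here.
            return f"END Withdraw ${amount:.2f} at store {self.store_id}: SMS sent"
        return "END Session closed"

# Walk the flow the way the steps above describe it.
session = WithdrawSession("+25263XXXXXXX")   # hypothetical number
for reply in ["1234", "3", "7042", "20"]:    # PIN, menu choice, store ID, amount
    print(session.handle(reply))
```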

If you want to pay a registered merchant, you use a Merchant ID provided to you and then both of you receive confirmation texts.

When you see this all in action it of course feels a bit like PayPal, except it takes like no time at all. What do these kiosks look like? They look just like money changers—dollars are kept locked underneath or in a physical store behind the people in the photo. Those dollars are really worth something. That number on the stand is the Store ID [the code to use for transactions]. Ben Franklin never looked so good!

Zaad money kiosk

Let’s talk about developing economies and one of the biggest sources of money people use to live. Money doesn’t just come from the transactions for your own goods and services, but it comes from remittances. This is money sent from people who have emigrated and are making more money in another country. This is billions of dollars a year, even for Somaliland. In fact, a huge source of these funds is right here in Seattle, which has one of the largest Somali populations in the US (second to Minnesota). The closeness of family (clans) creates these centers of population. [Note. This is still true today and now we know for Somalia proper the challenges that currently exist in the US, especially in Seattle.]

It has historically been very difficult to remit money, and in a place lacking infrastructure (there is no postal system here) and addresses it is impossible. Remittances are commonplace in the Islamic world and are called Hawala. In most Islamic areas, services have developed that replaced the person-to-person process used for centuries. In Somaliland, the global for-profit Dahabshiil, which specializes in Hawala, is used by many. Zaad is disrupting them to some degree, but also partners with them on the backend to actually do the transfer.

Perhaps the coolest thing about Zaad is that it was built as a home grown system. All the work happened in Somaliland, according to the people we talked to. And they are very proud of that—it is created, owned, and operated by the TeleSom company.

The role of banking as a platform element is well understood. But few of us could imagine building a banking system from scratch, and that is just what Zaad is. The positive reinforcement loop is readily apparent, to both the people using the service and to the folks that run TeleSom.

There are some big questions that TeleSom will face as the service becomes ubiquitous. We had a chance to talk to the leaders of the service to learn more about how they will approach moving the system forward.

Security. It is amazing to think that people with very small amounts of money will literally trust all their money to the phone company. Yet in a world where there are few entities to trust, the phone company is reliable and solves a lot of problems for you. In addition, TeleSom provides a great many transparency services. About 6 months after the service launched, a web-based system was introduced. One of the key benefits of this is being able to print out a full transaction history. This allows you to have your own records for future dispute resolution. And of course keep in mind that your photo and biometric identity are on hand. We [Windows 7 and 8] had discussions about fraud, PIN-lockout, and so on and it seems to me they are at least as aware of the challenges and solutions as one could hope.

Future services. One of the biggest areas of opportunity for Somaliland currently being considered is for the municipal government to find a way to use the service to pay taxes or pay for services such as utilities. There are some obstacles to doing this, first and foremost that the government does business in Somaliland Shillings. M-Pesa became very complex because of all the services. It is not clear Zaad needs to do this right away, but they also want to make sure they are aggressive. In talking with the folks it sounds like they are very much geared up for a feature war and want to move fast and not get left out.

In terms of a positive reinforcing loop, imagine how good being able to easily pay taxes and fees could be. Today compliance is low, not because it is bad to pay, but simply because it is too hard to pay. If time is your only asset, then spending a day trekking to an office and waiting on line to pay a monthly occupation or business tax/fee is simply prohibitive. If more people pay, then services can improve. And if it costs less to pay (fewer offices and manual processes) then more money can go to services. What an awesome platform for the government.

Competition. SomTel is already developing their competitive service. This is as expected. You can also expect the remittance firms to feel the pinch as well. So competition will heat up. And when competition heats up…

Interoperability. Remember when ATMs used to not talk to each other? Remember when Cirrus and PLUS had to litigate in order to work across banks? It is interesting to think about how these systems work when two people have different carriers attached to their SIMs. I bet cash becomes the short-term intermediary. But that starts to look a lot like a $3 ATM fee for a $20 withdrawal. Sigh, but the good news is folks are clearly aware of and up to the challenge.

Regulation. M-Pesa exists in countries that have banks. And those banks are not happy. As you can imagine they are doing all the lobbying they can to get the operators to be regulated as though they were a bank. This is much more challenging than it appears. First and foremost, they don’t really offer the key services of a bank which is to take the money deposited and loan it out (which is why, for example, micro finance institutions do get pulled into regulatory discussions). This means all the things regulations are there to protect, primarily reserves on hand, do not apply. You can bet that the TeleSom folks are on top of this. They are particularly sophisticated about this topic even though they have little regulatory experience. In our discussions it was clear that some folks have spent time abroad learning. For the time being, they are going to be able to avoid being called a bank since there are no banks to compete with them.

One super interesting thing to keep in mind is that all transactions on Zaad happen on the books of TeleSom. It is simply moving existing money from one SIM owner to another. The risk is very low and the transaction costs effectively zero. That’s nothing at all like a bank that takes your money and gives it to someone else outside the bank as a loan (or invests it). With no money leaving the system, this is a different type of bank. That’s why interoperability becomes a challenge—it means companies need to trust each other and risk is increased. As long as TeleSom is solvent and does not attempt to profit from your “deposits” things are very safe and very cheap.

Revenue. The other reason regulation is not going to be easy is because TeleSom does not currently charge for the service. To me this is the most interesting question. The TeleSom folks are quick to say they are all set up to charge and ready to go. This is, after all, what M-Pesa does (and why the service is 10% of Safaricom’s revenue). But this is not such a no-brainer to me. The service does not cost much to operate and essentially pays for itself based on the reduced costs of adding minutes (since you already have their cash on “deposit”). And it is certainly going to alienate people. It does feel like there is a Bank of America moment waiting to happen.

There are many ways to potentially charge and seeing this evolve will be very interesting. The PayPal model seems interesting – charge on the receipt of funds. But perhaps cash transactions should be free, though that might create an artificial incentive. Should government transactions (once the currency is solved) be paid for by a flat fee or percentage, by the government, by the utility? Who pays in a revenue model is a complex question. And given the amount of money any individual has, these will create new behavior patterns—like those of us who refuse to pay ATM fees so we choose banks differently or seek out specific ATMs.

In previous posts (“de-monetization”) we have talked about how in a technology-driven product, services that used to cost money or were sold at a profit by one party have a likelihood of becoming a free part of someone else’s offering. To use a common analogy from the [internal] LITEBULB discussion group, Zaad is almost certainly like the free parking at a Las Vegas casino. While parking is charged for in other places, there is little reason to charge if you know you will make money at the game tables. [Note. Las Vegas casinos tried charging for parking and most have stopped after that failed experiment.]

Zaad is a remarkable service. It is remarkable for how it revolutionized the economy in a year (not an understatement). It is remarkable in how it has penetrated most every family. It is remarkable in that it is complex technology developed in-house in an area of 4 million people and delivered reliably, robustly, and throughout a network of thousands of partners. Each of those partners makes money directly from the service. The amount of friction in the economy has been reduced. The more people have the system, the more people want to use it.

Banking has always been a platform. But who would have ever thought we’d see an entire banking system built from the ground up by a local company connecting little metal kiosks to mobile phones, in a place where the per-capita GDP is under $250.

--Steven

# # # # #

]]>
<![CDATA[2025 Books: Reading, Learning, Thinking (235)]]>https://hardcoresoftware.learningbyshipping.com/p/235-2025-books-reading-learning-thinkinghttps://hardcoresoftware.learningbyshipping.com/p/235-2025-books-reading-learning-thinkingThu, 11 Dec 2025 22:02:29 GMT

What a year it has been. It has also been a year where I chose to read books from a wide range of topics, some well outside the technology and innovation non-fiction I almost exclusively read. With books on Apple, Nvidia, China, and AI there was plenty in my traditional wheelhouse—some very good and some not so much IMO. Like many hot topics there were often multiple books from different perspectives so I was glad to find the one I might recommend to someone looking for one.

As always, these are not affiliate links so find your favorite outlet. Most every book is available in multiple formats including audio which is my favorite these days. Books presented in order I read them, most recent first.

Happy Holidays and Best Wishes for the New Year!

PS: As an author, I know the personal pain of even the slightest criticism but I am offering my honest opinions below. 🙏

Subscribe now

1. The Notebook: A History of Thinking on Paper by Roland Allen https://a.co/d/4HhtAu1 // [no, not THAT Notebook] what a fun book. In the spirit of the Petroski classic, “The Pencil”, this book goes through the history of the notebook, sketching, and taking notes. Loved it. Great way to end the year of books. I am 💯 in on paper notebooks. The book starts with a very fun and slightly snarky history of the ever-present and not-as-old-as-you-think Moleskine. For the technologist this book covers the technology of notebooks. For the intellectual it covers the psychology of note taking. It is super fun in the spirit of Petroski books like The Pencil.

2. Nineteen Eighty-Four, Brave New World, Fahrenheit 451, The Time Machine, Animal Farm. The “big three” or four or five of dystopian fiction, often (in olden days) studied as a unit in middle school or early high school. Seems like the time is right to re-read these because so much of the 20th century is being revisited. Also, because no matter which side of the political spectrum you’re on, there’s a good chance you see dystopia in what the other side is doing or saying. There are some wonderful audiobook productions of these to be enjoyed as well. Wild how much the language in these is just our language. Doubleplusgood :)

3. When Everyone Knows That Everyone Knows . . .: Common Knowledge and the Mysteries of Money, Power, and Everyday Life by Steven Pinker https://a.co/d/fvHGzgv // how many levels of lying before something becomes common knowledge? This book asks that interesting question as it dives into the topic of common knowledge. What is common knowledge? It is stuff that I know and you know, and I know that you know, and I know that you know that I know. You will get the hang of that after about the 20th time the book repeats this phrase. For me it just made the book and the ideas difficult to wade through. There’s obviously something there, but I struggled.

4. The Money Trap: Lost Illusions Inside the Tech Bubble by Alok Sama https://a.co/d/fRu9Ny2 // Liar’s Poker meets the crazy guy. This is a memoir by Alok Sama, the former chief dealmaker at SoftBank. This is finance porn through and through. I lost count of how often the price, brand, size, or exclusivity of some consumable, home, or article of clothing was vividly described. Every meeting was described starting with the private jet taken to get there. The porn never ends. It also has the kind of humility you’d expect, such as the tiresome criticism of Masa San for holding investments too long while then being critical—without saying who made the call—about selling Nvidia too soon. It’s an easy read but not the one to learn about SoftBank or Masa from. The epilogue demonstrates the true nature of the “crazy guy” and how it is the outlier deal that makes up for all the losses, something a “cook” (Masa, the hunter, describes Alok that way) doesn’t always appreciate. I do appreciate the honesty with which the story is told.

5. 🕸️🕸️🕸️This Is for Everyone by Tim Berners-Lee https://a.co/d/iL3IGU6 // This book is part memoir of the WWW technologies from the start, part personal memoir and victory lap, and part opining on the past and future of the connected world. In that sense it covers a lot of ground, but it also can leave you wanting more in any one of those. There’s a lot of reflection over how things turned out on the “internet” versus perhaps the original intent, or at least the intent as described today. At times this reads a bit like “what hath man wrought” and regrets over the forces that took web technologies in directions different than TBL would have chosen. Perhaps the most interesting journey in the book is one of life—going from the “hands off” view of regulation in the early days, to a gradual embrace of regulation, then to a need for regulation, to today’s AI world, which needs precautionary regulation. It reaches the peak with the statement “Course correct AI now before the exploitative use patterns of the past decade repeat—now is our chance for a do-over”, contrasting sharply with the earlier statement “I’m not one to believe in regulation but…”. There’s a similar arc when it comes to the author serving as director of W3C, from not wishing to employ the authority of a centralized leader of the web to the end, where it is clear not only did he exercise that authority, but he seemed to view it as necessary to counteract the “forces” of the larger companies. The book spends a great deal of time on personal data and rights to privacy—views I see as the classic libertarian views of MIT and the earliest hackers, in a positive way—but proposes a technical solution (Solid or a data wallet) that clearly can’t work without just freezing the world in a government-mandated, top-down manner, and even then it is unclear how this becomes a privacy win (anyone you provide your data to can, or must, store it unless you mandate all computation happens on a device, but then the standard must encompass a universal cross-device runtime as well as a similar data store; the web doesn’t really do that today even if some think it comes close). I always struggle when someone starts from the untrue argument that the biggest social networks/search engines “sell your data” as they decidedly do not sell data (that’s their whole moat). There are a lot of moments from the past that make an appearance, which makes the book fun to read for an old-timer like me: debates over formats like RDF and RSS, debates between competing web standards bodies (W3C and WHATWG), debates over minor things like link colors or the IMG tag, and more. I recommend reading it because it is important but not because it is easy to agree with. The audiobook version has a bonus interview between TBL and Stephen Fry that I think validates some of what I said above.

6. 🙀🤮 If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All by Eliezer Yudkowsky, Nate Soares https://a.co/d/4fN6CUa // #1 Best Seller on Amazon! This is from the doomer of all doomers, so I had to read it. It is ridiculous. I want this time back. It lost me at the comparison of AI to pre-Nazi Germany and the Holocaust. Someone described this to me much better than I could: this is a book about debating the fantasies of AI, much like debating battles in the MCU. Anything is possible but nothing is real. Read this by Thomas Dullien instead: http://addxorrol.blogspot.com/2024/07/someone-is-wrong-on-internet-agi-doom.html

7. ⭐️⭐️⭐️⭐️ Underground Empire: How America Weaponized the World Economy by Henry Farrell, Abraham Newman https://a.co/d/7SalE5E // This book makes a strong argument about liberty and why trying to control the global financial system as a political tool of economic warfare has backfired. It is most relevant to the USD reserve currency concerns many have. My view is this book pairs nicely with Chokepoints below, which tries (and IMO fails) to make the case for economic warfare. I really liked this book even though it is critical of some people/ideas I agree with. Loved the discussion about Microsoft and how it tried to be clever only to find itself “unknowingly” complicit in this surveillance and ultimately came to define itself as an arm of the government, like the banks. The book explains broadly the incredible dilemma large multinationals face when operating under the threat of regulatory retaliation for not doing what is illegal for the US government to do on its own.

8. 🇺🇸🇨🇳🦫⚖️Breakneck: China’s Quest to Engineer the Future by Dan Wang https://a.co/d/9WBG0Iu // This is a wildly popular book about the current economic competition between China and the US/West. I really love the framing of “engineer v lawyer” that permeates the book. Dan goes through the specifics of this framing, the difference in “building” (a favorite tech topic), the rise of technology in China (see Apple in China below), and decidedly draconian policy failures in China on Covid and one-child. The final chapter of the book is a “best of both worlds” call for the US to embrace engineering more. Dan and I did a podcast on the book, which is on the a16z site.

9. 👍The Journalist and the Murderer by Janet Malcolm https://a.co/d/24QVrbw // a favorite of my friend @balajis from a decade ago, and ahead of its time for many. The book begins by describing the pain one feels after cooperating with a journalist who comes to you and says they want to tell your story, but when the story releases it is anything but. Until you’ve felt this pain, anger, humiliation, betrayal or worse it is tough to really say what it feels like. Whether it is a story about you, a person you know, events you witnessed, or events you participated in, the feeling is unique. You can be triggered for years after, and with the internet these stories are amplified repeatedly, never to be filed away. It truly does suck. But why is that? This book explores the whole process of journalism and the mindset required, even when it hides behind “accountability”, “truth to power”, or the “first amendment”, and the inherent conflict when the journalist claims to learn along the way and changes sides in a story. This is all told through a real-life story of a journalist writing about a murder.

10. 🫤 The Sirens’ Call: How Attention Became the World’s Most Endangered Resource by Chris Hayes https://a.co/d/fAkCNM9 // This is a book about attention. It didn’t capture mine because it was too “pop psychology” and relied on what I think of as relatively shallow non-primary sources we see in media, advertising, etc.

11. 🌴🤡😹😹😹Class Clown: The Memoirs of a Professional Wiseass: How I Went 77 Years Without Growing Up by Dave Barry https://a.co/d/daVZgbe // Dave Barry holds a special place in my heart because when I was in high school he started writing for the Miami Herald, his column quickly showed up in the Orlando paper, and my best friend was immediately tuned into it. His books were the kind we read on the plane or bus on trips and loved. The humor—that decidedly back-of-the-classroom smart-ass mocking of everyone and everything that was so 80s—made us feel more grown up than we were, and I think only later did we understand the layers of meaning and seriousness of his work. This was an enjoyable memoir and of course an amazing read. Hilarious description of his first real job as a business writing coach. Amazing stories of that crazy era of journalism where one could get paid 3x the salary of an engineer and have an unlimited expense account to write one column a week. Wild times, even he admits. The audiobook is a treat because Barry narrates. I listened while morning walking and laughed out loud at almost every sentence.

12. 👎🚫The Thinking Machine: Jensen Huang, Nvidia, and the World’s Most Coveted Microchip by Stephen Witt https://a.co/d/95I6W84 // I knew this book would be less satisfying than The Nvidia Way from the introduction, where the author described how “prominent AI researchers wrote an open letter about the risks and dangers of AI…and Huang did not sign it.” Even the title “most coveted chip” is meant to sound less than positive (envy doesn’t invoke positivity). Then, after the fourth time recounting the stock price, the cost of clothes, or enumerating the compensation of named executives, I began to lose patience. What followed were some choice insights such as “…All of this innovation will go through a single corporate siphon”—corporate siphon??? Then “the largest single day gain of any stock in Wall Street history. Most of that was due not to what Nvidia had done, but what investors expect it to do in the future.” Pure. Genius. Then finally the book concludes with what amounts to an Atlantic essay on AI doomerism that has nothing to do with Nvidia other than the writer was fortunate enough to be able to ask dumb questions of Jensen. Do not waste your time with this book. Tae Kim’s book is useful. Reading this book will make you dumber about AI, Nvidia, and innovation. Truly cringe. <<Insert Billy Madison meme>> I’m sad that Jensen got interviewed for this book since it lent it credibility it did not deserve.

13. 🚀🚀🚀The Nvidia Way: Jensen Huang and the Making of a Tech Giant by Tae Kim https://a.co/d/aRJRNHv // just read this. There haven’t been any good books on Nvidia and Jensen’s history, and this one is very well done and current. I appreciate a book about a tech company that looks critically but not cynically or fatally. Go read this. Jensen is a legend and he’s not done by any stretch. Love the hat tip to “The HP Way”. The book is very accurate in terms of how it describes the technology. There’s one section, “the engineer’s mind”, that I don’t agree with, but it’s a cliché and so it’s not a surprise to see it. If you’re a lover of culture, then this book is definitely for you. It also arrives at a technical level not often seen, which I liked, especially with respect to CUDA, which is not discussed enough elsewhere. Wish it would have talked about ARM a bit more. Maybe that is the old battle though.

14. Chokepoints: American Power in the Age of Economic Warfare by Edward Fishman https://a.co/d/bEZthjO // this appears to be a straightforward book that details the history of going to war not with rockets but with banking, trade, and reserve currency, but it is very much a one-sided political book by a participant. Depending on your perspective on any given issue, this book details either a brilliant strategy or the way the government tells the private sector what to do without the actual legal right to do so. It tells the story of protecting the nation while perhaps chiseling away at the rights of all in a free society and harming innocent civilians far more than the government or military, to achieve dubious ends. The book is told by an expert who served in the Obama administration in a sanctions capacity at State, though it is written as a third-person account. It is not a particularly advanced or technically detailed work, but it provides a broad overview of the complex world of financial regulations, sanctions, and statecraft. I found this book to be far too much of a success story for economic warfare (as it was called), especially as practiced by Obama/Biden. There is so little sustained evidence that sanctions work, with the most common examples being South Africa, Iran, and arguably drug cartels, all of which ultimately remain disasters. Even the book praises the nuanced and multilateral approaches Obama took on the heels of the Russian invasion of Crimea, which by some views set in motion the invasion of the rest of Ukraine a few years later, where apparently those Biden sanctions were also successful. The conclusion was that sanctions work, but Obama was just too brilliantly collaborative and thoughtfully nuanced in approach. Then the China actions, mostly Trump’s, were described as mostly erratic and ineffective, which should be evidence of how sanctions don’t work, but it was not. Most of all, the challenge I have with claiming that economic warfare is effective is that the only victims in this kind of war are civilians and private companies. The government doesn’t suffer. Putin did not lose anything nor did his oligarchs, but everyday Americans, Russians, and Europeans suffer. Yet many of the “boycott divest sanction” advocates are often quick to call out civilian casualties in kinetic wars. And with China seen as a viable reserve currency, sanctions simply punish the dollar, yen, and euro. As far as effectiveness, see also https://web.stanford.edu/class/ips216/Readings/pape_97%20(jstor).pdf. I appreciate the reality of limited tools, and real war is a horrible option, but without a grand reset of the global economy, economic warfare is overwhelmingly civilian casualties and has a limited to no track record of even medium-term success. I don’t think enough attention in the book was paid to the price we all pay in losses of freedoms and privacy in our own financial lives that economic warfare as a tool has introduced.

15. 🚀🚀🚀Gambling Man: The Secret Story of the World’s Greatest Disruptor, Masayoshi Son by Lionel Barber https://a.co/d/4JCHvt6 // I loved this book because Masa Son is one of the greats of the internet age. Most know him probably from his recent and somewhat mercurial investments like WeWork, but he has an incredibly long and wildly successful (with high highs and low lows) history of investing in the internet. Just read this. It’s great. To be fair, this is not a glowing biography by any stretch and I know fans will not take to the negatives, and I have quibbles with some of the pettier descriptions. I view this book as capturing a lot of detail about a “crazy guy” whom I and many admire. The most recent ARM deal was incredible.

16. 🤔 Blind Spots: When Medicine Gets It Wrong, and What It Means for Our Health by Marty Makary https://a.co/d/15kHhR8 // Dr Makary was recently appointed to lead the FDA. This book takes on several scenarios where the conventional wisdom from physicians turned out to be wrong: from poor research to falsehoods to the human condition. It is an interesting book, and it is easy to get drawn into it. That said, we seem to be in a moment where institutional trust is at a new low, but at the same time simply trusting those who lead the way in showing the circumstances that got us here isn’t right either. I found quite a few places in this book where the evidence offered to counter the bad science was equally bad, and that was disappointing. This isn’t the scientific reference it should be. In the end the book seems to be a list of “conventional wisdom that is wrong,” which is super painful to see, but it is unclear what the answer to this is, especially when there is no way to make only evidence-based decisions in medicine. Ultimately in medicine not everything has a study or data, in general or in the case of a specific set of patient conditions. Read with caution. Medicine has a lot of issues, but this book doesn’t show that those who found them are any more trustworthy.

17. ⭐️⭐️⭐️⭐️ The Conservative Futurist: How to Create the Sci-Fi World We Were Promised by James Pethokoukis https://a.co/d/aOONws8 // I really found this book a useful read. It is a great answer to both the “Abundance” book and “we wanted flying cars”. Whereas “Abundance” is more of an apology for not building things plus a reworking of a traditional agenda, I find this book to be a more sincere view of the cycles of the economy and in particular the damage that the post 1970-1995 public opinion and policy shifts caused and why a return (hence conservative versus a “right wing”) approach is needed. This book speaks to the great forces that have been at work that pushed the economy forward and the idea that growth is not bad and in fact is necessary. Super bonus points for references (lots) to Star Trek and Expanse.

18. 🔥The Day Wall Street Exploded: A Story of America in Its First Age of Terror by Beverly Gage (2018) https://a.co/d/9Q7ZtDt // I literally had no idea that a dramatic series of bombings—dynamite terrorism—was perpetrated in NYC against the financial centers in the early 20th century, championing anarchy, socialism, and labor. All before 9/11 or the 1993 WTC bombing (both by what we think of as classical terrorism). This early domestic violence was really the start of a century of socialist or “left wing” violence in the US that received a ton of attention in the late 1960s and early 1970s (Weather Underground bombings, the urban violence across the country — think of films like Dirty Harry and Death Wish). The long trail of socialist and anti-capitalist deadly violence is really something, but more than that is the growth in the defense that the violence was justified on higher moral grounds. That brings us right up to Luigi and then the Molotov cocktail attack against Jews that happened as I was reading this. This was another book about history that reminded me of all that I don’t know. The contrast between coverage of this violence/the memory hole of the turn of the century and the wall-to-wall coverage of “white nationalist violence” or “domestic right-wing terrorism” is quite stark.

19. Apple in China: The Capture of the World’s Greatest Company by Patrick McGee https://a.co/d/fiZlQ1Z // This is a book that details the level to which Apple has invested in China. My sense is (aka guess) the book was initially going to be mostly anti-Apple/exploitation of cheap labor, but since the recent election the narrative on China has dramatically shifted to what was previously a narrowly held view of state enemy #1. As such the book is negative on both Apple and China, painting Apple as an enabler of the “rise of China”. It is important to keep in mind that in the US each political party has held a minority view of China as enemy since Nixon opened relations. Back then and through Reagan, China was a military enemy but an economic opportunity to some, and a trade enemy and fragile communist state that would flip to democracy to others. Under Clinton the parties began to switch as the majority of Democrats embraced the WTO and free trade while Republicans became more skeptical of free trade. Democrats became focused on human rights while Republicans focused much more on the totalitarian enemy and became much more protective of trade/jobs. The one constant view has been the subset of Republicans who have thought of China as a military enemy and a risk to economic security/jobs. This appears to many now to be a new consensus view, replacing the view that free trade would beget democracy in China. Thus, you cannot read this book in only today’s political context. While I decided to read this because it is an important topic, to say Apple, either because of its scale or visibility, is somehow a culprit in whatever is going on ignores the trillions of dollars that flow to China from the rest of the world every year. It downplays the foundations of the tech industry that were in China/Taiwan already when Tim shifted Apple to China from the US, not to mention the “low tech” manufacturing that was outsourced to China already. I get the symbolism and the narrative desire to paint this as Apple and specifically Tim. But I wanted to say that up front. We spun up a whole assembly effort in China for Microsoft Surface. The introduction and history of Apple is well told and worth it just to remind everyone of what went on from 1975-2000. This is primarily a connect-the-dots narrative including many selected quotes from Apple analysts who were wrong as often as they were right. Apple did this, then that happened. The government of China does that, then Apple did something. It paints a very tight picture of a narrative of Apple getting played or willingly sacrificing much to gain and maintain its huge China business. My sense and experience don’t really support so much “evil” on either side. I think both sides were doing what they genuinely believed was right and still didn’t have a clear multistep plan. The book mentions the changes in cars including the early success of VW, then Tesla, and the rise of BYD. Many industries were like that. Many others were not. Chinese consumer electronics, for example, have not really achieved global status (though some of that is due to trade barriers): TCL TVs, Haier home appliances, or software in general. This is one of the most important business and geopolitical issues of our time. Reading this book should not make you angry or partisan but should point out just how insanely complex what Apple did was and how wrong all the US experts were about China (I would put myself in there with respect to software).
One of my pet peeves is when people portray China as corrupt or the Chinese government as heavy-handed without considering that international (or domestic) companies in the US face these same issues, just expressed differently. I haven’t written much about when I lived in China, but this post is recent, as is this book.

20. The Elephant in the Brain: Hidden Motives in Everyday Life by Kevin Simler, Robin Hanson https://a.co/d/9huCau6 // This is a social science book that brings together several lessons, observations, and anecdotes about how we lie to ourselves. “The aim of this book, then, is to confront our hidden motives directly - to track down the darker, unexamined corners of our psyches and blast them with floodlights.” It all makes sense. It is important to know this is happening even when you’re certain you are being objective. It especially resonates if you write or tell stories about situations in which you were a participant.

21. The Rise and Fall of the Neoliberal Order: America and the World in the Free Market Era by Gary Gerstle https://a.co/d/7P7G09N // This is a fascinating book about the rise of the “world order” that seems to many to have been undone since the election. The world that everyone knew until 2024 (excepting the aberration of 2016-2020) seems upended. But what if today, and the huge shifts in the electorate and in beliefs (how many issues have flip-flopped between parties this year?), do define a potential end of an era? If you lived through the weirdness of the NAFTA vote that was split across both parties (that is, each party was split) then you can see where this all starts, at least for me: Clinton embracing technology on the west coast and finance on the east coast while abandoning labor. He was almost Reagan-like in hindsight. But that schism wouldn’t endure. It isn’t unique though, as Eisenhower, a general and Republican, embraced the New Deal and perhaps more than anyone solidified it so that Johnson could build a Great Society. So much to think about. The only question not answered by the end was whether Trump was the final straw, the last gasp, or the start of the next thing.

22. 💯The Technological Republic: Hard Power, Soft Belief, and the Future of the West by Alexander C. Karp (co-founder, Palantir), Nicholas W. Zamiska https://a.co/d/2ZwQKmX // This is a must-read book as it presents a world view that has been in development for some time and in a sense harkens back to the “mission driven” world I grew up in during the Cold War, when being in technology was viewed as hand in hand with being part of a nation committed to a set of ideals. The book is really three distinct, though closely related, sections. There is a history of technology and software, specifically working on defense projects and how that has gotten to a sad state of affairs. This is super important. Then there’s a lot on the “hollowing out” of the American mind, which tilts towards a polemic against the 2000-2010 tech world, specifically the focus on consumer software. This feels a bit more like a memoir of the challenges and criticisms of Palantir during that time. It tilts toward a criticism of free markets themselves, which I don’t agree with, primarily because I am not a fan of the alternative (the administrative decision-making state). The book ends with a section that is more optimistic, which I appreciate. Don’t take my criticism for anything but a strong endorsement.

23. ❤️❤️❤️❤️On War by Gen. Carl von Clausewitz https://a.co/d/0t3H86o // I had never read this, and with the raging debates about “rules of war” over the past 500+ days I wanted to. I only knew about this book from the film Patton. This is the kind of book from another era (literally from the 1800s). von Clausewitz spent his life thinking about what to write before writing it and, afterwards, how to revise what he wrote. When you read it, you can’t help but think of how focused and undistracted he was — he didn’t real-time tweet out/workshop ideas, write interim blog posts, or appear on Sunday talk shows all before he finished. He just thought and wrote, for years. In fact, he basically wrote several books as a draft and intended to revise them. The book was published posthumously as per his plans. Amazing read, and incredibly it still stands up. “War is a mere continuation of politics by other means.” Loved it. “War is an act of violence to compel our opponent to fulfill our will.” Even better. Warning: “Clausewitz is about as hard reading as anything can well be and is as full of notes of equal abstruseness as a dog is of fleas” (from the real Gen. Patton). There is a chapter on the morals of war that draws analogies to fields from medicine to art, where the mechanics are clear and unambiguous but as soon as they are applied to a specific situation the ambiguity of morals dominates and, importantly, everything falls to judgement and talent, not abstract theories of how war should be prosecuted. Loved that—just like business. “All action in war is based on probable, not certain, results.” Probably need to read this a couple more times to really get the most out of it.

24. 🇮🇱✡️On Democracies and Death Cults: Israel and the Future of Civilization by Douglas Murray https://a.co/d/bWKjUgY // This is a controversial pick for some as this 2025 book was Xeeted by the President (who is widely seen as one who doesn’t read books) after a controversial episode of Joe Rogan (didn’t listen). Since Murray isn’t Jewish, I wanted to read something strongly defending Israel that came from someone not raised in our culture and experiences. Last year I read a number of Islamic histories and related works. The book is also a researched account of the Nova invasion, based on many interviews, extremely painfully recounted. He recounts generations of the paradoxical hatred and anti-Jewish behavior: Jews have been hated for being too poor and too rich, for being heretics and being too religious, for not having a state and then having a state, and so on... often at the same time by just different people. While the author is an historian, the book isn’t an academic history book. It’s the kind of book where on X there would be replies of “link please”, which is now routine for popular history books. For that reason, it probably won’t change any minds, nor did his appearance on JRE.

25. 😐Collective Illusions: Conformity, Complicity, and the Science of Why We Make Bad Decisions by Todd Rose https://a.co/d/67YFf4l // This is a pop social science book that offers high-level descriptions of many social phenomena such as preference falsification, cognitive dissonance, the collective illusion the book introduces, and so on. It is a selection of anecdotes and some good references to other works that pulls together a lot of what ends up happening both in large organizations and on social media. There’s no primary work here. It is probably more relevant today because of social media. I think this book is a more reasonable way to absorb these theories than reading Wikipedia articles, if you’re not going to go to primary works on each. The author is a TEDx-style speaker and has moved beyond academics to more pop efforts. As with all books like this there are too many personal and too convenient anecdotes. The reliance on fMRI studies makes all this less appealing. In the end there’s a lot of confirmation bias in this style. Everything fits overly neatly.

26. 😻😻😻😻American Tabloid by James Ellroy https://a.co/d/7rBSUAS // This was my fiction book for the year. I was very busy with work in 1995 and don’t usually read fiction, but a friend strongly suggested this as their “favorite book of all time”. They were totally right. What an amazing read. After I read it, I googled why there is no movie, and now I am part of the conspiracy theory. No need to tell you what this is about, but everyone would likely enjoy this, especially today.

27. 🙀🙀🙀🙀Doctored: Fraud, Arrogance, and Tragedy in the Quest to Cure Alzheimer’s by Charles Piller https://a.co/d/gLHXOXl // This book tells an incredible story that is deeply relevant in today’s world, where trust in the traditional institutions of expertise is at historic lows. As angry as people got over Theranos, this tells the story of decades of absolute fraud and systemic problems at every layer in the full stack of medical inquiry: from individual lab techs, PIs, journal and grant reviewers/editors, journals and retractions, citation logrolling, IRBs, animal testing, human trials, FDA, and NIH, all the way to squeezing out other lines of thought. The only question is: what else is like this? Must read.

28. 😻😻😻The Conservative Sensibility by George F. Will https://a.co/d/6Ei1Jut // George Will, with his ever-present bowtie, on ABC News on Sundays and the back page of Newsweek was a fixture of the 1980s Reagan era. He represented a sort of ideal of a school of thought that he often summarized as Americans being fiscally conservative and socially liberal with a strong defense, which in many ways is what set up the post-Reagan era until Obama. This 2019 book looks at a classical view of “liberalism” and in particular the concept of natural rights, which many believe is core to the Constitution and our nation’s founders. The book builds a historic and political argument for Madisonian views of the constitution around the most important issues we face today: destruction of the family, ever-rising expectations and promises of government entitlements, and a federal government that has “promiscuously” involved itself in every aspect of society following the death of enumerated powers. Chapter 11 on income tax is excellent. His discussion of virtues versus values is especially relevant to today’s debates over Palestine and Israel. The final two chapters got a little out there, even as a fan I must admit. An excellent read.

29. 🧠🧠🧠Atlas Shrugged by Ayn Rand https://a.co/d/j90TZeA // Who is John Galt? Of course, I read this as soon as I got to Microsoft, back when the programmer-libertarian ethos was in full swing. I wanted to reread it having lived that whole experience for three decades. This is a long read with a good many characters, speeches, and events, but well worth it. If you make fun of tech people who like Ayn Rand but haven’t read the book, you need to, or at the very least watch Gary Cooper in The Fountainhead, which is a literal translation of that book to film. It takes two viewings to let it all soak in, but it too gets the philosophy across, of course. “Tell me, Roark, what do you think of me?… I don’t think of you.” ❤️

30. 🫤Revenge of the Tipping Point: Overstories, Superspreaders, and the Rise of Social Engineering by Malcolm Gladwell https://a.co/d/5erBcjT // This is a look back at and a bit of a rewrite of The Tipping Point from 25 years ago. I admit I can’t usually make it through his books as they feel like over-extended slide decks, which I know is kind of rude. In this case it follows the standard popular social science writing format of identifying a known social thesis, then finding three examples which he shares with dramatic storytelling, and extrapolating that to a whole broad societal issue. This book tries to stitch together a bunch of work about “virality”, starting with actual medical virality and working through “narratives”, which he calls overstories, and then once again tipping points or critical mass. The book is benign and at least for me did not offer anything new. In a sense I am glad he did not coin a new phrase that will permeate LinkedIn.

31. 😻😻Twilight of the Elites: Prosperity, the Periphery, and the Future of France by Christopher Guilluy https://a.co/d/aT71WTr // This book is from 2019 and was originally in French. While it is about the French political and social situation you do not need to know much about French history to really understand what is being talked about. Why? Because everything is incredibly close to what is going on in the political landscape here between what we call the “coastal elites” and the “flyover states”. The electoral map has analogs in France and the relative behaviors outlined are spookily close. It is worth a read since with the distance of this being about France it affords a chance to reflect on the US.

32. 💯A Certain Idea of America: Selected Writings by Peggy Noonan https://a.co/d/e9IFy0U // A collection of topics covered over the past decades in her WSJ column, this is a wonderful and optimistic look at America and what makes it unique. Noonan’s words were an incredible part of growing up and covered some of the most important moments in history that I lived through (Challenger, the Berlin Wall, etc.). Her book “What I Saw at the Revolution” is a wonderful and important look at the events of those years. I enjoyed reading this book as I admit I do not subscribe and so do not read the column regularly. Many of the selected pieces were prescient: the Covid lockdowns, the Prince Harry book review, the “success robots” of elite schools, Biden, and more. Her observations on the broad political dialog combined with her brilliant writing make for incredible reads. Noonan is quite negative on Silicon Valley and high on the risk of AI in particular. She does point out Silicon Valley is generally “left leaning”, which still seems to surprise many people. While relatively recent, her Sept 23, 2023, piece “Biden Can’t Resist the ‘River of Power’” proved to be perfect in every dimension.

33. 😻😻😻Pattern Breakers: Why Some Start-Ups Change the Future by Mike Maples Jr (Floodgate Ventures) and Peter Ziebelman (Stanford) https://a.co/d/43Z2caT // I loved this book and feel everyone involved in any aspect of early-stage companies (investing, building, working) should read it. The book offers an experience-based take on what makes for successful companies. Mike and Peter bring unique insights and experience and combine that with one of Mike’s superpowers of finding common themes across company successes and failures based on exhaustive analysis. The book is a systematic view of characteristics of companies, ideas, and founders that lend themselves to success. It is presented with appropriate humility as well. It focuses on the title—the idea that great companies come from ideas and founders that are not afraid to break the patterns that exist in the market. Bonus: Mike narrates the audiobook, which is wonderful. Disclosure: I’ve known Mike since he was in college by way of his father, Mike Maples Sr, a legendary Microsoft executive and mentor to many (me!) who brought order to chaos and the maturity that was needed to create Microsoft Office. Strong recommend.

34. 😻Genesis: Artificial Intelligence, Hope, and the Human Spirit by Henry A. Kissinger, Eric Schmidt, Craig Mundie https://a.co/d/imnBHgq // This book is a rarity—an optimistic take on AI. In fact, to read this book is to presume AI will be with us and will transform many aspects of computing and life. Kissinger is somewhat of a surprise author on a book about AI (and he passed away just before completion), but he was at the leading edge of transformations for much of the 20th century. Eric is the legendary “adult supervision” at Google who came in at the earliest stages and brought scale and execution to a PhD startup. Craig is a longtime colleague from Microsoft (disclaimer) who has a long experience arc at the forefront of technology, leading one of the most storied “super computer” companies in the 1980s before coming to Microsoft to lead “advanced technology” and then ultimately serving as the CTO and Chief Research Officer, advising not only Bill and all of Microsoft but leaders around the world on technology as the PC and then the internet rose to prominence. The book looks at AI not through a technology lens but through a societal lens—if AI will transform, then how should it transform, what are the challenges we can take away from past transformations, and what are the opportunities that can best take advantage of its transformational power? In a book economy filled with p(Doom)=100 this book is way less than that, but it isn’t 0. It tries to thread a needle and guide government leaders and executives (my view of the intended audience) through the myriad policy choices and potential impacts of the technology.

# # # # #

]]>
<![CDATA[234. If Writing is Thinking…]]>https://hardcoresoftware.learningbyshipping.com/p/234-if-writing-is-thinkinghttps://hardcoresoftware.learningbyshipping.com/p/234-if-writing-is-thinkingMon, 21 Jul 2025 21:15:34 GMTSomething I worry about with generative AI in business and commercial use: almost no one fully reads anything in those environments.

Now imagine when even the author hasn't read what was written... yikes. How does AI writing and reading impact this reality?

Thanks for reading Hardcore Software by Steven Sinofsky! Subscribe for free to receive new posts and share my work.

I used to write long memos—significant ones—maybe once a year. I'd send them to thousands. That scale alone signals, "someone else will read it." I hoped direct reports and close colleagues would read them. I could count on 2 or 3 people to definitely read them.

Bill would read. Steve would read—but only if we discussed it in person, because that's how he worked.

Just an old memo featured in Hardcore Software.

I knew this, so I always made a slide version. I'd use it in dozens of team meetings. But even then, for months after sending a memo, I'd be referring members of the team back to what was in it. Could I have done better? Of course. I did the best I could at the time. I figured once a year people could read 20-30 pages for their job.

People want context. They want the big ideas. But getting an organization—of any size—to actually read is almost impossible.

The only reliable thing people read? Org memos. And even then, if one (as I often did) didn’t include an org chart picture—rather than just words—people would skim or skip and wait for (hopefully) a tree graph in the email.

And these were from the “big boss,” sending out “big strategy.” So if you think folks in big orgs are reading 40-page PRDs, budget plans, new product proposals, or deal docs deeply and regularly… you're probably kidding yourself. I know from friends there how the Amazon process has evolved. It too is breaking down, which is a bummer, as I am a huge fan of it.

Now enter AI. What happens when it's doing the writing—and not even the author has deep knowledge of what was written?

That’s like a compiled or multi-author memo no one ever actually read end-to-end.

And if people are asking AI to summarize—but the summary is lossy or invents data—what then?

I say all this as part of the "TV" and later "MTV" generation. Back then, we were told that fast-paced, cut-cut-cut media made us incapable of absorbing anything. Meh...ok boomer, I know you can't follow the plot of "24" but that's your problem not mine.

So maybe this is just old man yelling at cloud. But for me? My entire career has been defined by the reality that people in business don’t really read.

And this isn’t just a tech or big-company problem.

Take science: the reproducibility crisis is, in substantial part, because almost no one—not even reviewers—carefully reads full research papers. Same for grant proposals. Reviewers look quickly for pet issues (like statistics, sample size or technique, or whether their own work was referenced). They skip over what is outside their domain. They miss explicitly fraudulent work, which takes effort to detect (maybe AI reading helps with that?).

Take Wall Street. Every day analysts output 30-page write-ups on companies with detailed financial modeling. Almost no one is checking all of those. People consume them for the buy/sell/hold call and mostly for narrative confirmation one way or another. Tens of billions of dollars change hands on reports that few read and that even the authors don't always know deeply end-to-end.

Just thinking…

Thanks for reading Hardcore Software by Steven Sinofsky! Subscribe for free to receive new posts and support my work.

]]>
<![CDATA[233. "The Illusion of Thinking" — Thoughts on This Important Paper]]>https://hardcoresoftware.learningbyshipping.com/p/233-the-illusion-of-thinking-thoughtshttps://hardcoresoftware.learningbyshipping.com/p/233-the-illusion-of-thinking-thoughtsSun, 08 Jun 2025 18:15:10 GMT
Image

The Paper and Its Premise

At a technical level, the paper asks important questions and answers them in super interesting ways.

The paper, The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity, was written by Apple researchers Parshin Shojaee†, Iman Mirzadeh, Keivan Alizadeh, Maxwell Horton, Samy Bengio, and Mehrdad Farajtabar († was an intern). Full link to the paper here: https://machinelearning.apple.com/research/illusion-of-thinking

I couldn’t help but walk away with the conclusion that the big problem is not what LLMs do, but the incredible “hubris” that has characterized the AI segment of computer science since its very beginnings.

Addendum: Many AI Doomers—they do not call themselves that—have latched on to this paper as proof of what they have been saying all along. While technically true, their primary concerns were not that models do not think or reason but that models can autonomously destroy the human race. There’s a big difference. The Doomers should not look to this paper to support their hyperbolic concerns.

Please Subscribe. It’s Free.


The Origins of AI’s Human Metaphors

While the earliest visions of computers in applications (going back to Memex even) were about “vast electronic brains,” the AI field was born with the pioneering work on “neural networks.” At first, this was about simulating the brain—which is where the term “artificial intelligence” even comes from. In many ways, there was a naive innocence to this, both in terms of computing and neuroscience. It was the 1950s, after all. See Dartmouth Workshop.

But then, through five decades of false promises and failed efforts—called “AI winters”—innovators chose to communicate their work by anthropomorphizing AI. After neural networks, we had the earliest chatbot with Eliza. Robotics focused on “planning.” Then came “natural language.” The first efforts at “computer vision” followed. One of my favorites was “expert systems,” which tried to convince us we could simply “encode the knowledge of humans” into a rule-based system and do everything from curing cancer to analyzing company sales data.


Machine Learning Rises, and Expectations Follow

As neural networks rose again in the 2010s—and with the work Google did prior to that on everything from photos to maps—the phrase “machine learning” became popular. The idea encoded in that term was that machines were learning. Yes, in a sense, they were learning—but not in any human sense.

The reason for the AI winters was that expectations raced far ahead of reality. While many concepts remained and went on to either make it into products or continue as building blocks, the massive letdown and disappointment were not forgotten. Many people were down on AI, and a big reason was that the field seemed so full of itself all along.

For sure, other fields in computer science were like that too. Object-oriented programming was a massive failure in terms of delivering a quantum leap in programming. Social interfaces failed in terms of usability. The “semantic web” came and went. “Parallel computing” never really worked. Much to the dismay of my department head, programs were never proven or formally verified. And so on.


Anthropomorphizing LLMs: The New Wave

The failures of AI were so epic that people ran from studying it, grants disappeared, and few departments even taught it—certainly not as a required subject. The few who persevered were on the fringes. And we’re thankful for that, of course!

With LLMs and chatbots, the anthropomorphizing really took off. Models learned; they researched; they understood; they perceived; they were unsupervised. Soon, everyone realized models “hallucinated.” We used to say “the computer was wrong” when it offered a bogus spelling suggestion or a search produced a weird result, but suddenly, using AI meant the result was equivalent to “perceiving something not present”—but it’s software; it didn’t perceive anything. It just returned a bogus result.

With Agents, they were going to act without any human interaction. Even using terms like “bias” or “chooses” are highly human traits (think of all the human studies that can’t even agree on what bias means in practice), and yet these terms were applied to LLMs. Most recently, we’ve seen talk about LLMs as liars or as “plotting their own survival,” like the M-5.

And of course, the ultimate—artificial general intelligence. AI would not only be human but would surpass being human.


The Absurdity of Humanizing Tools

Just this weekend, one company with an LLM advocated for a new “constitutional right” (my words), which would guarantee model-user “privilege” like what we have with mental health professionals or lawyers.

The absurdity of thinking we would humanize a digital tool this way—when we don’t even have privilege with a word processor we can use to type our deepest thoughts—is awkward, to say the least.

I would love to have more privacy and have advocated for it, but in no universe is what I say to an LLM more important or even different than any other digital tool.


A Reminder: These Are Tools, Not Beings

Along the way, there were skeptics, but they sounded like Luddites in the face of some insanely cool technology and new products.

Let me say that again—the technology and products are generational in how cool they are. They are transformative. This is all happening. The next big thing is here. We don’t know how it evolves or where it goes. It is like the internet in 1994. But it is like 1994. Many have raced ahead. It is going to take time.

In the meantime, anthropomorphizing AI needs to stop. It is hurting progress. It is confusing people. It is causing stress. It isn’t real.


The Costs of Anthropomorphizing AI

The use of this anthropomorphizing terminology has had three important effects, not all good:

  1. It attracted massive interest.
    AI was finally here. AI was always a decade out. Now it works. By using human terms, it was easy for people to imagine what Chat/LLMs did—but without actually seeing them in action, or only seeing what was shown in quick clips, posts, or pods.

  2. It created a false regulatory urgency.
    Anthropomorphized AI implies it needs to be controlled like humans—with laws and regulations. Worse, the assumption was it would be the worst elements of humans, plus it was faster, smarter, and relentless. The AI Safety movement was born out of easily expressed concerns based on anthropomorphized AI. If AI was smart and biased, or AGI and autonomous, then it must be controlled. Nothing poses a higher risk to our technological future than applying the precautionary principle too broadly to AI.

  3. It inflated expectations.
    We are right back to where we’ve been with all the previous AI winters—with a risk that everything unravels because of the mismatch between expectations and reality. That mismatch is the root of customer dissatisfaction.


We’ve Been Here Before

There is a well-known dynamic that has been part of computing from the very start: when humans interact with computers, they tend to believe what the computer says. This has been validated in many blinded tests, particularly research done at Stanford in the 1990s.

This was first noticed in the earliest interactions with Eliza. It was well-known through the 1980s that whenever your credit card bill or bank balance was wrong, the default was to blame the computer—and then assume you missed something.

The research was well-documented in the book The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places by Byron Reeves and Clifford Nass in 1996. I strongly suggest reading this. It had a real influence on how we approached building Office, including all the “agentic” features like AutoCorrect and, of course, Clippy and its natural language interface called Answer Wizard.

Thinking computers are right is, of course, dangerous—but it is human nature to defer to perceived expertise.


Lessons from Clippy: Humility in Design

We learned a lot of lessons building good old Microsoft Clippy. Among them was to be humble. The reason for the happy paperclip was precisely because we knew Clippy was not perfect. We knew that we needed a way to convince people not to believe everything it suggested.

Of course, we were right about that—just wrong about how infrequently we would be correct, given 4MB of RAM, 40MB of hard drive space, and no network. Tons more here in Clippy, The F*cking Clown if you’re interested.


AI Is a Tool—And That’s a Good Thing

AI is wonderful. It is absolutely a huge deal. But it is, today and for any foreseeable future, a tool. It is a tool under the control of humans. It is a tool used by humans. It is a tool, and not a human. The fact that it appears to imitate some aspects of humans does not give it those human traits.

For those reasons and more, we should not be concerned or afraid of AI beyond the concerns over how tools can be used, misused, or abused. That could be said of VisiCalc on an Apple ][, Word on Windows, Netscape and the internet, or more.


Final Thought: Tools, Not Minds

The humans are in charge.
AI isn’t human.
AI isn’t thinking.
It’s amplifying, assisting, abstracting, and more.
Just like tools have always done.

]]>
<![CDATA[232. From Typewriters to Transformers: AI is Just the Next Tools Abstraction]]>https://hardcoresoftware.learningbyshipping.com/p/232-from-typewriters-to-transformershttps://hardcoresoftware.learningbyshipping.com/p/232-from-typewriters-to-transformersFri, 30 May 2025 20:15:12 GMTAI is a tool-driven revolution. That’s why it unnerves people. Freeman Dyson said in 1993, “Scientific revolutions are more often driven by new tools than by new concepts.” That’s AI.

For those deep in tech, AI is clearly a new paradigm—a sweeping theorem of software. Most paradigm shifts precede new tools. But with AI, we fast-forwarded: from paradigm to tools in under a decade, now used by hundreds of millions. That speed doesn’t soften the typical reaction to new tools—fear, skepticism, even rejection. We’ve seen this with every major shift in computing. This post shares a few examples of that resistance in action.

Thanks for reading Hardcore Software by Steven Sinofsky! Subscribe for free to receive new posts and support my work.

Image
This is an article from Physics World reproduced in Science magazine. As I was in the business of making tools it so resonated with me I had it pinned to my Microsoft cork board for decades....

On Bari Weiss’ Honestly podcast, a recent debate tackled: Will the Truth Survive Artificial Intelligence?

The “yes” side featured Aravind Srinivas (Perplexity) and Fei-Fei Li (Stanford); the “no” side, Jaron Lanier (Microsoft) and Nicholas Carr. I won’t spoil the debate, but a major theme was concern about learning, writing, and education.

AI is a tool. Tools abstract and automate tasks. Each new one adds abstraction and automation over what came before. That’s rarely controversial—until it touches something people are emotionally or professionally invested in.

Case in point: teachers once opposed typewriters in class. They worried students would forget how to write. They were right—by college, I could barely write cursive. But typing papers was faster and easier to grade. Teachers opposed calculators for fear students would forget how to add. But now engineers skip the slide rule and get vastly more done with libraries of routines and more.

My freshman year (1983) was a turning point. Two quick stories:

First: Most students arrived with typewriters—graduation gifts. You were expected to turn in typed papers. Rich kids had “fancy” models that let you backspace before a line was committed to paper.

Meanwhile, the university had a few WANG word processors—business-grade machines—available to select writing sections. Faculty were worried: if students didn’t handwrite first drafts, would they learn to write at all? That exact fear came up in the podcast too.

So we ran an experiment. Most students used pen and paper, then typed. A few of us used WANG machines for everything. Faculty planned to compare the results.

Then came January 1984. Macintosh launched. Apple pushed them onto campuses. What the faculty hoped to study was rendered moot overnight. The tool leapfrogged the debate—just like GPT did two years ago.

The real issue wasn’t just speed. It was abstraction. Word processors offloaded spell check, formatting, and editing—freeing us to focus on content. Educators already complained about poor spelling before grammar checkers showed up.

Second: I was a computer science major. CS was a new discipline then—separated from electrical engineering in the 1960s. Its foundation? Abstraction: you didn’t need to solder circuits to build software.

This wasn’t universally accepted. Many schools kept CS under engineering, requiring EE and physics. That meant you still learned transistors to write code. My school dropped that. We were among the first CS majors who didn’t take physics or EE—and some argued we’d never truly understand computers.

They were wrong. That too was an abstraction.

AI is the next abstraction layer.

And like all previous abstractions, it’s criticized on two fronts:

  • Loss of fundamentals: New users won’t understand what came before. That’s true. But also true: they can do far more than previous generations. Abstraction is about not needing the old tricks. No one misses manually hyphenating or footnoting on a typewriter.

  • Lack of understanding: Critics say people won’t know how their AI-generated results were made. That’s a weak argument. When a carpenter uses a nail gun, do we say they no longer understand roofing? I know what my computer is doing even if I’m not flipping bits manually.

So why the negativity around AI in learning? Is it just a replay of new technology in schools?

It’s not unique or new. People say students will get lazy, not “really” understand, or miss what “matters.” The same was said about word processors. And Macs. And dropping EE courses.

Growing up, we were drilled on how to find things in the library—but never allowed to use the encyclopedia. That was “cheating.” Odd, since many families invested in full encyclopedia sets.

Then I discovered the almanac. Game over. I won every classroom research contest using that book. We bought a new edition every year. It felt obvious. That instinct is why my dad bought an early PC. My first thought going online? “Now I’ve got a real-time almanac—at 300 baud.”

What we hear today about AI—worries about truth, AGI, or education—isn’t really about those things. They’re dressed-up ways of resisting change.

Writing and learning with AI is the typewriter, the word processor, the encyclopedia, and the almanac rolled into one. Seeing it as something scarier than that is just fear of new tools and new paradigms. Again. It is no surprise that we’re seeing so much writing about these concerns—writers are the ones being directly challenged, just as electrical engineers were challenged by software abstracting away hardware.

Some will say not all abstraction layers are created equal. Not all tools are “harmless.” And if they believe that about AI, they would conclude that AI needs more scrutiny sooner and that we should slow down until we understand it. The challenge is that the future doesn’t wait around for everyone to come to a consensus. It arrives with new tools in hand. That’s what happened in 1984 when Macintosh arrived. That’s what is happening with AI.

AI is here. It’s already happening.


]]>
<![CDATA[231. When It Comes to Tariffs, China is Different]]>https://hardcoresoftware.learningbyshipping.com/p/231-when-it-comes-to-tariffs-chinahttps://hardcoresoftware.learningbyshipping.com/p/231-when-it-comes-to-tariffs-chinaWed, 09 Apr 2025 22:00:53 GMTMuch has been said about how the U.S. benefits from China’s manufacturing strengths, and tariffs often dominate the debate around international trade. But what’s often overlooked is how incredibly difficult it is for American companies to sell into China and to build a sustainable business—particularly when it comes to services and intellectual property. Tariffs are just the visible tip of the iceberg. Beneath the surface lies a vast, complex web of soft barriers, regulations, and cultural dynamics that make the market nearly impossible to access in a fair, sustainable way.

I spent 15 years at Microsoft navigating these waters, including time living and working in China specifically to work on building a real business there. What I experienced was far more challenging—and revealing—than any tariff dispute.


Image
Me at some of the many official functions I attended in China in the mid 2000’s to cooperate and develop IP strategies. (personal)

Microsoft’s first foray into Asia was Japan in the late 1980s. It wasn’t easy. There were technical hurdles (there was no Unicode yet), a strong local preference for domestic products, and government policies that subtly (and not-so-subtly) favored Japanese companies. In many ways, it was not unlike the “Buy American” policies we see in the U.S. Still, with persistence, respect for local norms, and significant investment in product localization, we found success. Japan’s deep-rooted respect for intellectual property played a key role in that. By the mid-1990s the Office business in Japan was our most profitable in the world, and customers—business and consumer—loved the product and how we tailored the distribution and software to the market.

Image
Selling Windows 7 at retail in Japan at launch (personal)

China, however, was an entirely different story.

From the outset, we encountered a maze of complications. An early version of Windows was banned entirely because some of the localization work had been done in Taiwan (see https://www.hbs.edu/faculty/Pages/item.aspx?num=23685). That was just the beginning. We responded with what we believed was good faith effort after good faith effort: We built a significant local development team, pioneered tools like the Input Method Editor that became a much-loved standard, established advanced R&D facilities, and adhered to every guideline for doing business in China—even hiring locals to represent the CCP inside our own offices.

And still, we hit wall after wall.

Piracy was the most obvious and frustrating challenge. While software piracy was a global problem, the scale in China was staggering. Roughly 90% of Microsoft products in use were pirated. Imagine a country with 200 million PCs generating only about as much revenue as Italy, which had just a quarter of the PCs and a piracy rate of “only” 50%. We used to justify this to ourselves by believing that someday those customers who loved the product for free would come to value it and, with the government honoring IP, would pay us. HA.
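A back-of-the-envelope check of that comparison, using only the rough figures in this paragraph (200 million PCs at 90% piracy versus an Italy with a quarter of the PCs at 50% piracy), shows why it stung: the much smaller market ends up with more paid installs.

    # Rough figures from the text above, not precise market data.
    china_pcs, china_piracy = 200_000_000, 0.90
    italy_pcs, italy_piracy = china_pcs // 4, 0.50   # "a quarter of the PCs"

    china_paid = china_pcs * (1 - china_piracy)   # about 20 million paid installs
    italy_paid = italy_pcs * (1 - italy_piracy)   # about 25 million paid installs
    print(f"China paid installs: {china_paid:,.0f}, Italy paid installs: {italy_paid:,.0f}")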

Visiting the bustling computer malls made the issue even more concrete. Five or more floors of computers—everything from fully built systems to parts for building your own. You’d select a system and they would even assemble it. When you were finished, they gave you a menu to select your software, and after a short wait they’d return with a customized CD containing any software suite you wanted—Windows, Office, Photoshop—all bundled with serial numbers in a text file in the root, and even some pirated movies thrown in for good measure. All for ¥100, then the equivalent of about $12.

We pleaded our case in meeting after meeting with government officials. Over long banquets and countless toasts of Baijiu, we would talk about cooperation, innovation, and the value of intellectual property. The response was consistent: the government cited poverty, claiming it couldn't afford licensed software while simultaneously driving black Mercedes and dining in luxury above high-end Ferrari dealerships.

Eventually, some officials were more candid: “We do not believe in your same concept of intellectual property,” they’d tell us. “We believe knowledge should be spread and shared.” In theory, that’s a noble idea—one echoed by open source advocates—but in practice, it was a rationalization for copying and reselling our work without compensation.

Image
Typical story on the coverage of the launch of Windows 8 (2012) in China and the battle over piracy. (Reuters)

With Windows (and then Office) we introduced progressively stronger “anti-piracy” measures, only to find customers sticking with older versions of the software that were easier to pirate—versions that were also far more prone to security exploits. We showed the government the designs and programs before we released them and yet still suffered a strong campaign by the government against what we did. We were told our “methods” (a simple registration wizard) were not compatible with the market. So you could go to the airport and see the “unregistered Windows” warning pop up over the flight boards. The cash registers at my supermarket ran Windows XP a decade after it had been released rather than pay for Windows. PC makers came to ship PCs without Windows, overtly claiming they were doing so out of respect for US antitrust law.

After posting this, a follower commented to suggest a joint venture.

This joint venture suggestion was commonplace. The idea was to establish a shell company that “operated” the Microsoft assets and returned profit to Microsoft. It was touted as a “win-win” approach, and many, many companies established such ventures. A big part of China owning 49% was that it would secure all IP rights as well as operational rights to what was going on in the company. In return, a company was promised streamlined access to the licenses needed to operate. In the case of the old MSN.COM that actually meant it was the only way to obtain a business license for an internet service, so we never actually closed a JV and ended up with no business. There were other requirements, such as all the servers having to reside in China and be open to real-time inspection by the government agency (the Great Firewall). This JV solution was often the only hope of operating in the country and was clearly another form of tariff or even theft.

Image
Coverage from US media (Reuters, 2008) on the anti-piracy measures we took with Windows Vista.

And it wasn’t just software. American and European companies across industries—pharma, fashion, publishing, autos—faced similar issues. I remember touring massive pharma manufacturing sites outside Shanghai. Many in tech are familiar with Foxconn and device makers but pharma is even bigger. While the official line was that these plants produced for Western companies, we all knew a portion of the output was being diverted and sold locally without compensation to the inventors.

Even consumer goods weren’t spared. On a rainy hike with colleagues from Microsoft China, I noticed everyone had North Face jackets—just like mine. But while mine kept me dry, theirs soaked through. They were knock-offs, made in the same factories, using the same logos, but with cheap materials.

I believed deeply in finding a path to success in China. I supported building our R&D presence, gave talks, scaled teams, and held onto the hope we could replicate the hard-earned success we had in Japan. But over time, it became clear there was no end to the compromises, and no real path to long-term business sustainability.

And we weren’t alone. Google left the country. Meta was essentially locked out. Microsoft’s revenue from China remains under 1% of global totals, even in an era of cloud and subscription software that’s harder to pirate. Even Apple—one of the few U.S. success stories in China—faces mounting pressure from government intrusion and local competition. Automakers like Ford have pulled back. Volkswagen’s market share is half of what it was just a few years ago.

It’s easy to focus on tariffs when discussing fair trade. They’re visible, quantifiable, and politically convenient. But in China, they’re far from the biggest obstacle. The real challenges are much harder to measure: soft restrictions, regulatory mazes, cultural gaps, and shifting definitions of fairness and property.

Yes, every country has its own forms of protectionism—the U.S. included. The EU has its tensions with American tech. But across decades, we’ve found paths forward in many regions. With China, after 25 years of effort, we’re still waiting for a meaningful breakthrough in how the tech industry is allowed to operate there.

So, when we talk about international trade, let’s not stop at tariffs. The real story—especially in China—is far more complicated. And far more important.

—Steven


]]>
<![CDATA[230. MCP - It's Hot, But Will It Win?]]>https://hardcoresoftware.learningbyshipping.com/p/230-mcp-its-hot-but-will-it-winhttps://hardcoresoftware.learningbyshipping.com/p/230-mcp-its-hot-but-will-it-winFri, 28 Mar 2025 05:30:54 GMTIf you don’t know, MCP—Model Context Protocol—is the hot, new, open source software layer promising to make connecting LLMs and domain-specific software a snap. It originates from Anthropic and promises to “simplify artificial intelligence (AI) integrations by providing a secure, consistent way to connect AI agents with external tools and data sources.” Already a broad set of companies have announced and even delivered support for MCP, including OpenAI, Microsoft, Confluent, Cloudflare, and Cursor, to name just a few. The fanfare of these announcements makes many observers quick to anoint a winner. It isn’t that simple, though. If it were, innovation would quickly grind to a halt. Here’s why.1

The docs are pretty straightforward, and it is easy to see why everyone from developers to app providers to enterprise IT is excited by it. Essentially MCP is a way to broker requests between AI providers and consumers while integrating with local data. In MCP terminology, a Host might be a desktop tool that wants to access AI from within the tool (like an IDE). Clients maintain 1:1 connections with Servers, which are programs that expose specific capabilities through the API (think of a service). Along with that there can be Local Data and Remote Services, which are what you’d expect. It all makes for a tidy picture.

At this early stage of AI everyone wants “in” and wants to start building, but at the same time everyone is afraid of lock-in on the LLM side, and everyone surfacing AI in their existing products is afraid of their value-add getting sidelined by native LLM capabilities. Having an API that allows LLMs to claim integration with any existing product is a clear win for any LLM trying to build a developer platform. Having an API that allows any developer to plug in a different LLM is a clear win for products not wishing to put all their chips behind any one vendor. Enterprise IT is excited because one big promise of AI, especially agentic AI, is that everything will connect to everything.
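To make the Host, Client, and Server roles concrete, here is a minimal sketch of the brokering pattern in plain Python. To be clear, this is not the actual MCP SDK or wire protocol; every class and name below is a hypothetical stand-in for the terminology above, just to show how a host fans out to servers through per-server clients.

    # Hypothetical sketch of the Host / Client / Server roles described above.
    # Not the real MCP SDK or wire protocol -- just the brokering pattern.
    from dataclasses import dataclass, field
    from typing import Any, Callable, Dict

    @dataclass
    class Server:
        """A program exposing specific capabilities (think: a service)."""
        name: str
        tools: Dict[str, Callable[..., Any]] = field(default_factory=dict)

        def register_tool(self, tool_name: str, fn: Callable[..., Any]) -> None:
            self.tools[tool_name] = fn

        def handle(self, tool_name: str, **kwargs: Any) -> Any:
            # The server decides which local data or remote services to touch.
            return self.tools[tool_name](**kwargs)

    class Client:
        """Maintains a 1:1 connection with a single Server on behalf of a Host."""
        def __init__(self, server: Server) -> None:
            self.server = server

        def call(self, tool_name: str, **kwargs: Any) -> Any:
            return self.server.handle(tool_name, **kwargs)

    class Host:
        """A desktop tool (like an IDE) that wants AI plus local context."""
        def __init__(self) -> None:
            self.clients: Dict[str, Client] = {}

        def connect(self, server: Server) -> None:
            self.clients[server.name] = Client(server)

        def ask(self, server_name: str, tool_name: str, **kwargs: Any) -> Any:
            return self.clients[server_name].call(tool_name, **kwargs)

    # Usage: a "files" server exposing one capability, consumed by a host.
    files = Server("files")
    files.register_tool("read", lambda path: f"<contents of {path}>")
    ide = Host()
    ide.connect(files)
    print(ide.ask("files", "read", path="notes.txt"))

The interesting strategic question is not this plumbing, which is easy; it is who gets to own each of those boxes.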

MCP is quite definitionally the classic middleware playbook. That it was introduced by one of the key leaders in the field is itself a classic aspect of the middleware playbook. The industry has a long history of middleware. Many might not recall or even know, but the initial Dept. of Justice antitrust case against Microsoft was all about middleware and at the time the browser itself was viewed as middleware, as was Java. That there was one case about two instances of middleware that were supposed to also replace each other shows the complexity of introducing middleware.

The term middleware appears 38 times in the Consent Decree from 2006 but this is the key definition:


The point of middleware is always the same—integrate things on one side with things on the other side. The promise is always that betting on the middleware allows a single party to avoid relying on any one instance on one side of the equation while being able to take advantage of the full breadth of market capabilities on the other side. Middleware is supposed to make it such that the best product wins for every customer, with the customer sitting in the middle of both sides of the middleware. It means no one provider (or service) can dominate by monopolizing all consumers (of services), while consumers are placed on a level playing field with all their competitors.

All cross-platform software is a form of middleware. The creators of cross-platform middleware promise to make their middleware run on all operating systems while all the vendors using the middleware layer are granted freedom from worry about the nitty gritty of different platforms.

My own personal experience has seen everything from C to object-oriented programming to application frameworks to SQL data connectivity to business intelligence to HTML to browsers themselves come and go as middleware solutions. Even today everyone knows a browser isn’t one thing but many. For even more history: when Microsoft started as a company (50 years ago), its claim to the market was creating BASIC to run on all the microcomputers of the time. It then built its first MS-DOS applications to do the same, using a proprietary middleware toolset that we still actively used for Word and Excel across Windows and Mac when I was a new hire, until Office for Windows 95 and eventually Mac Office 98.

That’s enough background, but I will circle back to this point in a moment to explain why this is challenging.

Middleware never quite lives up to these promises in practice. MCP, if history is any guide, will go down one of two paths:

  1. Everyone will use it.

  2. Everyone but one key platform will use it.


When (1) happens, two things will be true. First, no one will effectively monetize it. Second, every vendor will also add unique aspects to how they consume (client) or produce (server) the interchange. This is internet networking: TCP/IP or HTTP. In practice the resulting layer is low level enough that all the interesting things happen on top. Remaining at a low level is what happens as something goes from "new and cool" to having broader capabilities built on top of the low-level bits. You can think of the full HTTP stack as adding value on top of the basic protocol.

When (2) happens, it is because one platform is the leader; our industry has a long history of "everyone but..." APIs and protocols used by everyone but the leader. Usually by the time that leader "comes around" to being a first-class citizen, it no longer matters, as the industry has moved on to the next big thing as a center for innovation.

The reason this happens is that, at the heart of the matter, anything everyone might want as a producer/server will only retain its position if it has a user experience that is unique and monetizable. Eventually, one of those consumers will also become a competitor at being a server. Things then blow up.

Everyone that wants to be a consumer/client will be trying to consume *all* the other servers out there. They will, however, always be missing one important server, the leader who is busy trying to also become the consumer/user experience. Consumers who rely on everyone else being their server are in for a surprise when they see the lack of motivation to only be a server. Every enterprise customer, for example, will always be fighting to get one vendor to "open up". See how identity/security have evolved. There’s adverse selection in that the second and third-ranked producers and consumers often sign up eagerly for the middleware API in hopes of gaining distribution and/or traction.

On the whole, HTML followed this path. Everyone thought all the UI would just converge to standard HTML and consume XML from servers (or some variety of that). What happened, though, was that everyone with data wanted to build their own user interface to maintain control of their data/server/service. And on top of that, the definition of HTML took on many vendor-specific implementations. No one directly monetized browsing (some even de-monetized it), and it took 15 years to get to implementations that reached 90% feature compatibility.

Why is this? The biggest challenge with middleware is that for it to be successful and not stifle innovation middleware needs to be the sum of all the capabilities that it chooses to integrate. Middleware always starts off simple because it always starts off in the earliest days of new platforms. The platform is just simple so it is easy to envision middleware. As mentioned that is when demand for middleware is the highest because uncertainty is the highest.

In 1989 no one was sure if Windows would work, and if you picked up any computer industry publication it was filled with evaluations of new products based on which platforms they worked on. I spent the first 3 years of my career trying to make Windows, DOS, Mac, and OS/2 apps all the same by building middleware. It was not difficult to write an abstraction layer for the 150 or so APIs that made up the core experience of each of those operating systems. I literally knew them, the error codes, the parameters, and more by memory. I still do. But that is a single point in time. Today iOS has 250,000 APIs. AI has infinite APIs. See this post for more on cross-platform.

I get the excitement. The key realization I came to, and talked about at the first Win32 Developer Conference when Windows 32-bit was announced, was that to be great at working across all the platforms that were interesting, a toolset/framework needed to be a superset of all the capabilities of those platforms. Building a superset meant that building successful middleware required a development team the size of all the other development teams combined. I also knew how fast Windows had grown in just 3 years since Windows 3.0 and how many people were working on expanding it.
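A toy illustration of that superset problem (all names hypothetical, not any real toolkit): a lowest-common-denominator layer can only safely expose the intersection of the platforms beneath it, so every capability that exists on one platform but not the others becomes work the middleware team has to take on itself.

    # Two hypothetical platforms with overlapping but unequal capabilities.
    PLATFORM_APIS = {
        "PlatformA": {"draw_window", "open_file", "print_document", "pen_input"},
        "PlatformB": {"draw_window", "open_file", "print_document"},
    }

    # A lowest-common-denominator facade can only expose what everyone has...
    common = set.intersection(*PLATFORM_APIS.values())
    print("Safe to expose everywhere:", sorted(common))

    # ...so every platform-specific capability is a gap the middleware team
    # must emulate, stub out, or re-implement to stay a superset.
    union = set.union(*PLATFORM_APIS.values())
    gaps = {name: sorted(union - apis) for name, apis in PLATFORM_APIS.items()}
    print("Per-platform gaps the middleware must paper over:", gaps)

With a few hundred APIs that gap is a manageable list; with hundreds of thousands it is a second platform team.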

Now everyone always has their favorite example of when this works or how it worked for them. Games commonly get talked about as following a great model. Except games integrate poorly with every host they run on. Game runtimes in effect create a purpose-built OS for running games and offload a vast amount of work to game developers.

A cleaner example is database connectivity. In the world of structured data everyone wants to be free of Oracle in the enterprise. But there has never been a production API that lets you easily “choose” which database to use. This is wild when you think about it because SQL data is extremely well-defined. But in the real world, the performance characteristics of any non-trivial product using SQL vary depending on how Oracle or any other database was implemented. While the semantics might be the same, the implementations are different.
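For a tiny, concrete version of the same-semantics, different-implementation problem, consider Python’s standard DB-API. The sketch below uses only the built-in sqlite3 module; the comments about other drivers describe typical differences (placeholder syntax, performance), not any one product, and the table and query are made up for illustration.

    # Python's DB-API standardizes the *shape* of database code, but each
    # driver differs in details: placeholder style, transactions, performance.
    import sqlite3

    conn = sqlite3.connect(":memory:")  # another driver would connect to a real server
    cur = conn.cursor()
    cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
    cur.execute("INSERT INTO orders (total) VALUES (?)", (42.0,))  # sqlite3 uses '?' placeholders
    # Many other drivers use '%s'-style placeholders instead -- same semantics,
    # different implementation, and very different performance characteristics
    # once the queries are non-trivial.
    cur.execute("SELECT id, total FROM orders")
    print(cur.fetchall())
    conn.commit()
    conn.close()

Swapping the driver keeps the code recognizable, but enough details change that “choosing” a database remains a project, not a configuration flag.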

Anyone who has built cross-platform mobile applications knows how much Android and iOS have diverged, and they started from huge and complex APIs that were already difficult to make cross-platform. And every web developer knows the ins and outs of browsers, even after Microsoft converged on Chromium. Meanwhile everyone from Oracle to Nvidia to Apple to Google wakes up every day making the job of getting software to work across these platforms with middleware even harder. It is literally the business strategy.

The other side of this is the consumer side, that is the side of the equation that wishes to present their product as integrating with all other products. This always seems benign enough. Who would not want to be able to pull data from every system into Slack or Notion or even Excel? Everyone would.

But there is a challenge: the transformation from the user experience and data semantics of that tool to the generic tools everyone uses all day is lossy. What does it mean to “use Excel to analyze Workday” or “use that applicant tracking system with Slack”? These integrations often make for good demos, but the edges quickly show. Data can be viewed but not modified. The product can add new capabilities, but the integration experience fails to show them off, even when that might be required. Very quickly what seems like an enormously customer-friendly solution if you’re Slack or Excel turns into a nightmare if you’re Workday, Salesforce, or something else. In fact, much like the platform vendors who wake up making it harder for middleware to commoditize them, these application providers wake up every day trying to get people to stay in their tools and see all the capabilities.

This is why even when middleware works, the economics aren’t there. Or when it works, there’s always a set of key players—providers or consumers—that aren’t participating because they want to be “on top” and have the market power and customer base to do so. For every Oracle there’s a Microsoft SQL Server trying to be more open and win that way. And there’s always the open source alternative being used in large numbers but less so at massive scale (always the MySQL+Oracle strategy anyway).

All isn’t lost. The key way middleware can win and become the standard is when a company solves the middleware problem so well that it becomes the platform itself. In a sense it rises above being in the middle and becomes the actual platform. There are two examples, which let me circle back to the (1) and (2) at the very start when it comes to how MCP can be a success.

It is possible for middleware to become the platform and win, yet win hearts and minds but not the economics. Ultimately that is what happened with Google Chrome and browsers (at least for a “generation”). One can debate whether the default search engine, cookie handling, and so on carried indirect economics, but there’s no debating there are no direct economics. The browser remains middleware. Unless you use a Chromebook, you run the browser as middleware on iOS, Android, Mac, Windows, and every other device. Google retains outsized influence on the direction browsers will go, but because it lacks full penetration across devices, developers are less likely to adopt new features. Winning isn’t so great sometimes.

It is possible for middleware to win and include everyone but one important vendor. That is what happened with identity and access management (“directory”). Okta became the market leader in a category that started off as middleware. It did so by solving the problem of integration even as the then leader, Microsoft with Active Directory (AD), refused to participate at every step. As it turns out, Microsoft’s history of winning with AD played an important part in its failure to recognize the importance of Okta to customers and the very need it solved.

Briefly, Microsoft “woke up” to the power of the directory when Eric Schmidt made a bet while he was CEO of Novell (before Google). Schmidt posited—based no doubt on his experience at Sun in the Unix era—that PC networks would need a directory. Microsoft saw that and reacted by expanding AD into a competitive position. Because Novell was losing the server market, Microsoft’s AD was able to win by being part of the server market. Netscape then came along and pioneered an open source academic project, LDAP, in an effort to provide an Internet-scale directory. While AD was woefully inadequate for the scale LDAP could manage, AD was adept at managing enterprise PCs (servers and printers too) and ended up winning again. The fact that Microsoft won both these battles with AD in a sense gave it the confidence to assume that everyone would write special code in their enterprise SaaS apps to integrate with AD. But that code was complex, Windows-centric, and kind of annoying to deal with because of how it was architected. Okta, on the other hand, subsumed that complexity and made it easy to do the one thing customers wanted to do, which was maintain one identity across all their enterprise products and keep it in sync with what they believed was the ground truth, AD. Ultimately Okta did this so well that it became the ground truth. So in an “everyone but Microsoft” strategy, the real winner was the company that did a massive amount of work to subsume what Microsoft seemed unwilling to do.

There are other examples and I’m sure people will always quibble or debate details. Having lived through the middleware world for so long, I admit to feeling the pattern rather quickly.

One final word about middleware APIs: it is unfortunately a weird way to end, but an important point. Vendors pledging support for middleware might do so at the start with legitimate and sincere intentions. But over time the negatives of either being easily replaced or having their user experience and feature innovations ignored end up making the commitment to middleware look like a pretty poor investment.

Even well-placed intentions end up meeting market realities. Fear of commoditization, customer churn, or inability to innovate are powerful motivators away from middleware.

—Steven

1

This was originally a long tweet. It has been expanded and edited here.

]]>
<![CDATA[229. "Tech Goes Hardcore" ... Again.]]>https://hardcoresoftware.learningbyshipping.com/p/229-tech-goes-hardcore-againhttps://hardcoresoftware.learningbyshipping.com/p/229-tech-goes-hardcore-againTue, 25 Mar 2025 21:01:36 GMTOn an article this week talking about tech "going hardcore" (always a big fan of that word given Microsoft's 1980s recruiting slogan—hardcore software) which is mostly about RTO, meritocracy, and some cutbacks on perks, with a little bit of layoffs mixed in. See Tech Employees Getting the Message: Playtime’s Over.

Business Insider, March 19, 2025

Our industry goes in cycles. When I started at Microsoft in 1989 the company had seen a flat stock price for two years after the IPO pop (see the October 1987 crash). Even though Windows 3.0 was about to break through, there was a bit of a cloud. Success and then Windows 95 fixed that. Then a decade later came the dot-com bubble and the malaise decade (2001+).

During that malaise decade we saw the rise of Yahoo, Google, and then Facebook/Meta. One of the most competitive aspects between all of us became something I still can't believe—office perks.

Google introduced in its 2004 S-1 that the cost of perks pales in comparison to the output of engineers. Everything from 20% time to massages to dry cleaning and of course three gourmet meals a day. From the S-1:


Our employees, who have named themselves Googlers, are everything. Google is organized around the ability to attract and leverage the talent of exceptional technologists and business people. We have been lucky to recruit many creative, principled and hard working stars. We hope to recruit many more in the future. We will reward and treat them well.

We provide many unusual benefits for our employees, including meals free of charge, doctors and washing machines. We are careful to consider the long term advantages to the company of these benefits. Expect us to add benefits rather than pare them down over time. We believe it is easy to be penny wise and pound foolish with respect to benefits that can save employees considerable time and improve their health and productivity.

Meanwhile Microsoft had private offices AND free soda AND Lipton soup. Also at Microsoft we had always said "people are our most important asset". We agreed with Google but had different ways of showing it, I guess. It is worth noting Amazon also supercharged during this time and had historically been (and remains today) extremely frugal, as a retailer would be. The desks were still cheap surplus doors back then.

By then I was an executive and also leading much of Microsoft's technical college hiring process because Office hired hundreds from college each year. I started a blog TechTalk (most of the posts still remain but not all for some reason) to take on the competition and make the case for Microsoft.

One of the first posts I wrote was about perks and the whole idea of "work life balance". See Microsoft people were growing up and starting to raise the issue of balance while at the same time our chief competitors for college hiring were touting all these new perks to keep people at the office as much as possible.

My view then (and now) was that the hard work is a literal requirement for new ventures or "impossibly difficult" projects. I grew up in the shadows of NASA and knew plenty of people who gave it all for those big projects. And to _me_, bringing software to the first billion people on earth was that same level of calling.

BUT, doing so did not require the elaborate perks. Those did not create a culture and importantly might create an environment or culture of entitlement. Bill and Steve were always worried about those kinds of tradeoffs or costs, and really fought to keep Microsoft's perks (v benefits) at a sustainable level.

Competing with Google then was difficult because of the meals just to start with. The PowerPoint team had always been in SV (on Sand Hill Road if you can believe that) and the first challenge I faced leading Office was just how to avoid everyone going down the street to work for free meals as our PowerPoint office didn't even have a kitchen to prepare meals. We ended up having one meal a day brought in but that hardly compared to gourmet chefs.

Then in Redmond HQ, the HR and Benefits teams finally succumbed and put in dry cleaning pickup locations (though I was never sure who paid for the actual cleaning since I owned nothing requiring dry cleaning). I should mention Microsoft also provided towels in the locker room for employee use. Years later in a moment of frugalness the towels would be discontinued and there was a huge uproar. The towels eventually returned. I digress.

We had so many leadership team meetings and discussions about perks. Microsoft had 50-60K employees at the time. Every benefit multiplied by that headcount was an enormous cost. Plus we had massive presence across Europe where regulations required benefits many in the US didn't get such as mobile phones, transit, and extended vacation. But we always talked of the benefits of HQ itself.

While many wanted to increase benefits and perks we mostly held our ground because a) we believed in the work we were doing and the benefits of being part of that work and b) we just could not believe that as the new SV companies grew these benefits were economically sustainable. And c) most of all we agreed with the idea being espoused in SV about the level of hard work required to do great things.

Right then a BusinessWeek article appeared, Revenge Of The Nerds -- Again. It highlighted these perks and the battle between Google who was growing employees at the expense of Yahoo and Microsoft who was losing to both. They called it the "Willy Wonka Effect". It included gems like "Free perks range from gourmet meals at the company cafeteria to bathrooms equipped with digital toilets, where the seat temperature and bidet pressure can be controlled with a remote" as well as this killer:

Indeed, Google -- and, to a lesser degree, Yahoo -- has become what Microsoft used to be: a young, vibrant company working on the bleeding edge of the day's vexing technical issues. Before the Internet became the phenomenon it is now, Microsoft was a magnet for top talent interested in solving the toughest tech problem: making personal computing easy. Today, though, the gravitational force at the center of techdom is no longer the PC -- it's the Net. And while MSN holds its own with Google and Yahoo in terms of worldwide use, its engineers can't develop products that would undermine Microsoft's monopoly businesses, Windows and Office. Some researchers say privately that restricts creativity.

See Revenge Of The Nerds -- Again

Image
“Revenge of the Nerds—Again” from BusinessWeek July 29, 2005

That's why I wrote this post:

7/29/2005 On juggling, unicycles, and jousting

BusinessWeek has an article this week about the hiring binge at some companies in the bay area are doing (ok, Yahoo! and Google). The article raises a number of the standard “new high-tech hiring” clichés that are worth talking about because they seem to repeat themselves.

Let’s go back to 1989 – The Seattle Times Sunday Magazine ran an article the “Velvet Sweatshop” (some proud Microsoft people all got sweatshirts with that silkscreen – this was before zazzle when getting silk screening was a long lead analog process). The article was sent to me by my recruiter as I was about to make the trip from graduate school to start at Microsoft. The article was magic for me. It told the story of how employees work hard, work on great new technologies, and have a great time doing so. The article was my first exposure to some of the classics in “high tech employee” photos including of course people juggling and people riding unicycles, and of course the ever popular people riding unicycles re-enacting medieval jousting. But suffice it to say the article resonated with me (and scared my parents terribly since everyone had really long hair and wore flannel shirts even in the summer). Except for the 1983 Time magazine Machine of the Year article and the Microsoft Press Book, Programmers At Work, for me this article represented the best coverage of the programming culture I had seen to date (Steven Levy’s book Hackers later captured the spirit brilliantly).

Seattle Times, April 23, 1989 “Inside the Velvet Sweatshop”

Yet I have to admit, I’ve never actually seen programmers juggle, ride unicycles, or even joust. I have seen my share of golf in the hallways, filling offices with packing peanuts, and of course laser tag and water guns (all of these this summer at Microsoft!). That is to say, you probably can’t take literally what you read and at the same time the environment of fresh from college programmers has been pretty much the same, perhaps expressed differently, for at least the past 16 years that I know. It just seems that every new company generates the same “new” story about college grads joining computer companies.

Of course there is an element of one-upmanship that comes from these articles as companies try to portray their interpretation of the new-hire culture as unique. That's the marketing -- these articles are not spontaneous but come from PR efforts.

The one common thread among companies that hire lots of people new from college is that it is the very presence of a bunch of people of the same age, motivation, skills, and general attitude that yield the culture and not the company itself. The company provides the higher level goals (and the money to support the stuff) and chooses what aspects to reinforce. It was incredibly cool when I showed up at Microsoft—I was 23 years old and ready to go to work. I had no friends in Seattle. My family was 3000 miles away. I lived in an apartment within walking distance from Microsoft that had a pool where beautiful people hung out. I had disposable income for the first time in my life. I was ready to be one of those cool people on Melrose Place, except I quickly found out that the work at Microsoft was way cooler than sitting by the pool. I never got around to buying much furniture (Seattle now has an IKEA) and certainly didn’t get that Webber BBQ I wanted. But man, I wrote a bunch of code and learned more in 6 months than I learned in two years of graduate school—hardware breakpoints, real mode v. protect mode, real-world code generation, USER/KERNEL/GDI, Microsoft’s cool internal compiler and tools. It was amazing. (not to limit things to the past, this week was filled with just as many learning experiences for me personally).

When I talk with our new hires this summer what they describe is *exactly* what I experienced, except they all have broadband at home so they can work even if they get woken up in the middle of the night by that party upstairs. I had to dial in and read my email over 1200b CrossTalk.

I became friends with the people I work with. We would go out to dinner then return to work. We would see movies. Our work and social lives were blurred completely. It made for great fun and a great environment. It was our own Melrose Place, but with C++ code instead of an advertising agency. It had COMDEX instead of Venice Beach.

The magic is not about joining the next big thing, but the magic is in being part of something big *and* doing that with a peer group that is just as motivated and just as smart as you (think) you are. What I found out about Microsoft is that they had managed to hire a hundred college grads who were better programmers and knew more about software than me and were anxious to learn even more. And we were all working within the structure that allowed us to create the next big things. And we were all friends.

Microsoft is more about that today than we ever have been. Going through the “internet transition” that our company is famous for reinforced just how much the “under 30” crowd has to offer product development and at the same time just how far a little bit of “adult supervision” can go. To be clear that was about reinforcing that idea, not creating it. Steve Ballmer hired college graduates starting in 1980 and as late as 1990 I remember that he was copied on every interview schedule from college. Hiring from college is a core of Microsoft. Training people to be professionals and to focus their energy, creativity, and ideas on really delivering software to the world is also our core. Maybe in 1980 we were all trying to impress each other with software. Now we’re trying to impress the whole world!

One lesson I learned at Microsoft that I didn’t expect was how much of a meritocracy the company is. I mentioned previously that at a startup you are more likely to be the college “kid” doing grunt work than you might think, whereas at Microsoft the college kid is going to be the one presenting their work to Bill Gates (as we did a few weeks ago when we toured Bill around our hallways for an afternoon to show in Office “12”). We do not have a caste system based on seniority or on what type of or which degree you received. Once a company starts telling you about the degrees that people have or emphasizing specific schools you should know that such feelings run deep and don’t go away once you manage to get a job. In fact, companies that do that often look to folks outside those pedigrees to do the grunt work. When I was in college it was Bell Labs famous for hiring Cornell grads, and then you get there and find out that all the talk about PhDs was simply because it was the people with PhDs that ran the place. I think when you look at the leaders of the company you are likely to see a company created in their model—if the founders value a PhD then you can bet that they will value that in employees. Microsoft’s founders are probably most famous for the degrees they didn’t earn and therein rests the focus on the merits and the accomplishments over a career. I thought I was all fancy with my degrees, but really what mattered was can I get done what I committed to and how good was the work I did.

So there is a lot of talk about joining these “new” companies and the "new" way they hire people and let them work. It is one of those stories that repeats itself in journalism. What you experience at Microsoft is as much about what you bring to Microsoft—this is a business, not an amusement park. The great people that are drawn to Microsoft every summer create their own thrill rides while we together build software that changes the world. So grab your unicycle!

--Steven

Here we are in 2025, with all the companies having gone through layoffs, reduced benefits, and, as some might say, a vibe shift toward a focus on execution, delivering, and prioritizing important work.

I think history will record that post-bubble era of perks and "Willy Wonka" as the aberration and what we are seeing today as the best practice for innovation. Nothing comes for free but it is also the case that businesses need to operate as businesses even when the times are good.

No matter whether the change is a return to what was normal or a resetting to a previous normal never experienced at a company, change is difficult for sure.

—Steven

]]>
<![CDATA[228. DeepSeek Has Been Inevitable and Here's Why (History Tells Us)]]>https://hardcoresoftware.learningbyshipping.com/p/228-deepseek-has-been-inevitablehttps://hardcoresoftware.learningbyshipping.com/p/228-deepseek-has-been-inevitableMon, 27 Jan 2025 06:15:50 GMT

TL;DR for this article: DeepSeek was always going to happen. We just didn’t know who would do it. It was either going to be a startup or someone outside the current center of leadership and innovation in AI, which is mostly clustered around trillion-dollar companies in the US. It turned out to be a group in China, which for many—myself included—is unfortunate.

But again, it absolutely was going to happen. The next question is whether US technologists will recognize DeepSeek for what it is.

There’s something we used to banter about when things seemed really bleak at Microsoft: when normal companies scope out features and architecture they use t-shirt sizes—small, medium, and large. At its peak, Microsoft seemed capable of thinking only in terms of extra-large, huge, and ginormous. That’s where we are with AI today and the big-company approach in the US.

There's more in The Short Case for Nvidia Stock which is very good but focuses on picking stocks, which isn't my thing. Strategy and execution are more me so here's that perspective.


The current trajectory of AI if you read the news in the US is one of MASSIVE capital expenditures (CapEx) piled on top of even more MASSIVE CapEx. It’s a race between Google, Meta, OpenAI/Microsoft, xAI, and to a lesser extent a few other super well-funded startups like Perplexity and Anthropic. All of these together are taking the same approach which I will call “scale up”. Scale up is what you do when you have access to vast resources as all of these companies do.

The history of computing is one of innovation followed by scale up, which is then broken by a model that “scales out”—when a bigger and faster approach is replaced by smaller and more numerous approaches.

Mainframe→Mini→Micro→Mobile

Big iron→Distributed computing→Internet

Cray→HPC→Intel/CISC→ARM/RISC

OS/360→VMS→Unix→Windows NT→Linux, and on and on.

You can see this pattern play out at the macro level throughout the history of technology—or you can see it at the micro level with subsystems from networking to storage to memory.

The past five years of AI brought us bigger models, more data, more compute, and so on. Why? Because I would argue the innovation was driven by the cloud hyperscalers, whose approach was destined to do more of what they had already done. They viewed data for training and huge models as their way of winning and their unique architectural approach. The fact that other startups took a similar approach is just Silicon Valley at work—people move and optimize for different things at a micro scale without considering the larger picture. (See the sociological and epidemiological term small area variation.) People try to do what they couldn’t do in their previous efforts, or what their previous efforts might have overlooked.

The degree to which the hyperscalers believed in “scale up” is obvious when you consider that they are all building their own silicon or custom AI chips. As cool as this sounds, it has historically proven very, very difficult for software companies to build their own silicon. While many look at Apple as a success, Apple’s lessons emerged over decades of not succeeding, PLUS Apple builds devices, not just silicon. Apple learned from 68k, PPC, and Intel—the previous architectures it used before transitioning to its own custom ARM chips—how to optimize a design for its use cases. Those building AI hardware were solving their in-house scale-up challenges—and I would have argued they could gain percentages at a constant factor, but not anything beyond that.

Nvidia is there to help everyone not building their own silicon, and also those who want to build their own silicon but still need to meet immediate needs. As described in “The Short Case,” Nvidia also has a huge software ecosystem advantage with its CUDA platform, something it has honed for almost two decades. It is critically important to have an ecosystem, and Nvidia has been successful at building one. This is why I wrote that I thought the Nvidia DIGITS project is far more interesting than simply a 4,000 TOPS (tera operations per second) desktop (see my CES report).

So now where are we? Well, the big problem we have is that the large-scale solutions, regardless of all the progress, are consuming too much capital. But beyond that the delivery to customers has been on an unsustainable path. It’s a path that works against the history of computing, which shows us that resources need to become less—not more—expensive. The market for computing simply doesn’t accept solutions that cost more, especially consumption-based pricing. We’ve seen Microsoft and Google do a bit of resetting with respect to pricing in a move to turn their massive CapEx efforts into direct revenue. I wrote at the time of the initial pricing announcements that there was no way that would be sustainable. It took about a year. Laudable goal for sure but just not how business customers of computing work. At the same time, Apple is focused on the “mostly free” way of doing AI, but the results are at best mixed, and they’re still deploying a ton of CapEx.

Given that, it was inevitable someone was going to look at what was going on and build a “scale out” solution—one that does not require massive CapEx to deliver and that uses architectural approaches requiring even less CapEx to build and train the product.

The example that keeps running through my mind is how AT&T looked at the internet. In all the meetings Microsoft had with AT&T decades ago about building the “information superhighway,” they were completely convinced of two things. First, the internet technologies being shown were toys—they were missing all the key features such as being connection based or having QoS (quality of service). For more on toys, see “[...] Is a Toy” by me.

Second, they were convinced that the right way to build the internet was to take their phone network and scale it up. Add more hardware and more protocols and a lot more wires and equipment to deliver on reliability, QoS, and so on. They weren’t alone. Europe was busy building out internet connectivity with ISDN over their telecommunications networks. AT&T loved this because it took huge capital and relied on their existing infrastructure.

They were completely wrong. Cisco came along and delivered all those things on an IP-based network using toy software like DNS. Other toys like HTTP and HTML layered on top. Then came Apache, Linux, and a lot of browsers. Not only did the initial infrastructure prove to be the least interesting part, but it was also drawn into a “scale out” approach by a completely different player, one who had previously focused on weird university computing infrastructure. Cisco did not have tens of billions of dollars nor did Netscape nor did CERN. They used what they could to deliver the information superhighway. The rest is history.

As an example, there was a time when IBM measured the mainframe business by MIPS (millions of instructions per second). The reality was they had a 90-percent-plus share of MIPS. But in practice they were selling or leasing MIPS (not the chip company from Stanford) at ever decreasing prices, just as Intel sold transistors for less over time. This is all great until you can get MIPS for even less money elsewhere, which Intel soon delivered. Then ARM found an even cheaper way to deliver more. You get the picture. Repeat this for data storage and you have a great chapter from Clay Christensen’s Innovator’s Dilemma.

Another challenge for the current AI hyperscalers is that they have only two models for bringing an exciting—even disruptive—technology to market.

First, they can bundle the technology as part of what they already sell. This de-monetizes anyone trying to compete with you. Of course, regulators love to think of this as predatory pricing, but the problem is software has little marginal cost (uh oh) and the whole industry is made up of cycles of platforms absorbing more technology from others. It is both an uphill battle for big companies to try to sell separate things (the salespeople are busy selling the big thing) and an uphill battle to try to keep things separate since someone is always going to eventually integrate them anyway. Windows did this with Internet Explorer. Word did this with Excel or Excel did this with Word depending on your point of view (See Hardcore Software for the details). The list is literally endless. It happens so often in the Apple ecosystem that it has a name and is called Sherlocking. The result effectively commoditizes a technology while maintaining a hold on distribution.

Second, AI hyperscalers can compete by skipping the de-monetization step and going straight to commoditization. This approach is one that counts on the internet and gets distribution via the internet. Nearly everything running in the cloud today is built on this approach. It really starts with Linux but goes through everything from Apache to Git to Spark. The key with this approach, and what is so unique about it, is open source.

Meta has done a fantastic job at being open source, but it’s still relying on an architectural model that consumes tens of billions of dollars in CapEx. Meta, much like Google, could also justify CapEx by building tools that make their existing products better; open-source Llama is just a side effect that is good for everyone. That is not unlike Google releasing all sorts of software, from Chromium to Android. It’s also what Google did to de-monetize Microsoft when it began Gmail, ChromeOS, and its suite of productivity tools (Google Docs was originally just free, presumably to de-monetize Office). Google can do this because it monetizes software with services on top of the open source it releases. The magic lies in the fact that the value-add on top of open source is not open source per se; rather it’s hyperscale data centers running proprietary code using proprietary data. By releasing their products as open source they are essentially trying to commoditize AI. The challenge, however, is the cost. This is what happened with Hotmail, for example—it turns out that at massive scale, even a 5MB free mailbox adds up to a lot of subsidies.

That’s why we see all the early AI hyperscaler products take one of two approaches: bundling or mostly open source. Those outside the two models are in a sense competing against bundles and against the companies trying to de-monetize the bundles. Those outside are caught in the middle.

The cost of AI, like the cost of everything from mainframe computing to X.25 connectivity (the early pre-TCP/IP protocol for connecting computers over phone lines), literally forces the market to develop an alternative that scales without massive direct capital.

By all accounts the latest approach with DeepSeek seems to be that. The internet is filled with analysts trying to figure out just how much cheaper, how much less data, or how many fewer people were involved. In algorithmic complexity terms, these are all constant-factor differences. The fact that DeepSeek runs on commodity, disconnected hardware and is open source is enough of a shot across the bow of the AI hyperscaling approach that it can be seen as “the way things will go”.

I admit this is all confirmation bias for me. We’ve had a week with DeepSeek, and people are still poring over it. The hyperscalers and Nvidia have massive technology roadmaps. I’m not here for stock predictions at all. All I know for sure is that if history offers any advice to technologists, it’s that core technologies become free commodities, and because of internet distribution and de facto market standardization at many layers, that happens sooner with every turn of the crank.

China faced an AI situation not unlike Cisco. Many (including “The Short Case”) are looking at the Nvidia embargo as a driver. The details don’t really matter. They just had different constraints. They had many more engineers to attack the problem than they had data centers to train. They were inevitably going to create a different kind of solution. In fact, I am certain someone somewhere would have. It’s just that, especially in hindsight, China was especially well-positioned.


Kai-Fu Lee recently argued DeepSeek proved that China was destined to out-engineer the US. Nonsense I say. That’s just trash talk. China took an obvious and clever approach that US companies were mostly blind to because of the path that got them to where they are today before AI. DeepSeek is just a wakeup call.

I’m confident many in the US will identify the necessary course corrections. The next Cisco for AI is waiting to be created, I’m sure. If that doesn’t happen then it could also be like browsers ended up which is a big company (or three) will just bundle it for everyone to use. Either way, the commoditization step is upon us.

Get building. Scale out, not up. 🚀

—Steven Sinofsky


]]>
<![CDATA[227. CES 2025: An Abundance of (AI) Experimentation]]>https://hardcoresoftware.learningbyshipping.com/p/227-ces-2025-an-abundance-of-ai-experimentationhttps://hardcoresoftware.learningbyshipping.com/p/227-ces-2025-an-abundance-of-ai-experimentationMon, 13 Jan 2025 03:00:48 GMTThis year’s enormous Consumer Electronics Show proved to be a transition year with many established product lines showing incremental improvements or focusing on B2B areas and new product lines showing a rush to demonstrate their relevancy to AI. It’s still early.

Image
A crowd of people at a convention

Let’s get two things out of the way.

First, CES continues to be HUGE. I mean literally the show is just enormous and has sprung back to life post-pandemic. The official numbers were over 140,000 attendees (from over 160 countries) and 4,500 exhibitors (including 1,500 startups). This year the show spanned the entire Las Vegas Convention Center and the Venetian/Sands Expo Center, plus countless hotel suites and meeting rooms. I walked 83,000 or so steps, which wasn’t a record because things were a bit more compressed this year with the LVCC fully open.

Second, despite the name Consumer Electronics Show, the show is not for consumers, and more than ever before I feel it has toned down the idea of direct-to-consumer launches, communications, and booth content. I do think that is a positive, but it also makes the show a bit more difficult to describe and to make seem exciting. In recent years I have referred to this as “ingredients” or “platforms,” but I think in many ways the shift is complete. It was the iPhone (famously never directly represented by Apple at the show), and perhaps before that the shift of excitement to Yahoo/Google/Facebook, that began this trend away from CES as a shopping trip and toward CES as a true, deep industry show. Again, this is a net positive. Probably the most broad-based consumer product at the show remains TV sets, even with mobile phones in use in every booth and laptops still making news. TV has a special place in the origins of CES. Even home audio has seen a decline, with the “home theater” becoming a far more specialized niche whose booth space has shrunk just as it has in retail showrooms. While auto is a huge, direct-to-consumer industry, it always felt a bit out of place at CES, and I am glad to see the focus become more about the technology than the “cars,” even though that is still a bit weird.

A favorite thing I see every year is attendees literally shopping and learning that you can’t actually buy anything, and that most cool things are non-working prototypes intended to gauge or generate interest from jobbers. For example, there was a super cute 3-in-1 MagSafe charger for Apple products shaped like a Cybertruck, and I watched a half dozen people try to buy a single unit. The rep handled it all well, offering a great price on thousands of units available in 12 weeks. Back in the old days, when products were super heavy, you could often pay cash and grab a sample in the last days of the show just so the exhibitor didn’t need to ship things back. Not anymore.

Image: A cell phone on a toy car

The biggest structural changes in the show include the following. The presence of the largest tech companies, including Google, Meta, and Microsoft, is minimal to none. This year saw a keynote from Nvidia (following past keynotes from AMD and Intel over many years), which is detailed below. The largest booths continue to be the Korean consumer electronics companies, Samsung and LG. The largest Japan booth is Sony, but even it is a shell of what it used to be. Thankfully the US and German auto makers have toned down their booths, which took up a lot of space and contributed little information, replaced with a wide range of booths covering components (some startups) for autonomy and EVs (I won’t cover these here except for brief mentions).

The largest structural change continues to be the combination of country-sponsored booths and the Eureka Park booths focused on startups. The country booths were historically a bit of a gambit to get the trade and business sectors of governments to foot some of the bill for renting the space by aggregating all the companies from one country. This worked when it was showing off everything from Austria or Poland. But now nearly every country has a booth, which makes understanding what is going on difficult. As an example, nearly every booth from Europe has healthcare tech to show off, much of it sponsored by the national government. But if you’re trying to get a sense for healthcare in general you find yourself navigating a sea of country booths and context switching between healthcare, smart cities, and AI translation apps for one country and then another. Additionally, some of the country booths are enormous. The mainland China booth occupied half of a large hall with everything from literal USB connector components and neodymium magnets all the way to autonomous taxi drones. At this point the show would be better if CES could solve the booth rental economics while keeping themed products and companies together. The Eureka booths are also organized some by country and some by theme, but that area is so compact that simply walking the booths systematically is exhausting.

A quick word about the world we live in. Prior to the show there was the attack outside the Trump hotel in Las Vegas. This amped up security relative to initial plans. It was clear there were some last-minute changes and thus not everything was smooth. Much like post-9/11 COMDEX this was all necessary and warranted but could be frustrating (the Nvidia keynote was quite late to start, for example). Additionally, heavy storms over much of the US created havoc for those flying in from the east, which includes much of Europe. Attendance on day 1 was thinner as a result, which was great if you made it there. Then on Tuesday (the first full day of the show) the LA fires started. Suffice it to say many, many attendees and exhibiting companies were directly impacted by these. When you bring 140,000 people from around the world physically to one place, a lot can happen. A lot happened this year. 🙏

Experimentation

The major theme was experimentation, specifically with AI. What I mean is that much of the emphasis was on showing as much as a given company could credibly get done between its recognition of the enormity of AI (specifically GPT/LLMs) and the show. In Silicon Valley a year ago the phrase was “AI wrapper,” and many companies have moved on from that view. In the world at large I think we’re still seeing a lot of AI wrappers. This is not a negative; it is not only expected but necessary. This is literally how innovation happens. The first ideas aren’t always the best, but you have to traverse this idea maze to get to the good ideas.

I can’t stand reviews of the show that dismiss something as not useful right away or “trivial,” because everything we’re seeing will be seen by someone who will take it to the next level. Even if it isn’t the company showing it, the engineers will move on and the product people will be influenced. Today when you see something like SharePoint or Notion, one has to appreciate that in the late 1990s the show(s) were filled with collaborative “intranet web sites” that often looked exactly like these products do today. Not only did I see those products, but I wrote up my trip report and can draw a straight line of influence to what came to exist (don’t believe me? go read about eRoom as one of a dozen examples).

Themes

I will break down the rest of the report into the following themes:

  • Nvidia Keynote

  • AI

  • Health/Wearables

  • Glasses/Headphones/Wearables

  • Smart home / Home Automation / Pets

  • Auto / Transport / Drones

  • TVs

  • PC/ Gaming / Laptops

  • Smartphones and Accessories

  • Robots

  • Gadgets

I will include photos to show off some neat or interesting demonstrations. There’s no attempt to show everything or endorse any one product. Often it was just whichever demo I was able to get a better photo of.

But here are 7 BIG things to think about:

  • AI is the new ingredient. But right now, no one is sure what to do with it other than add it to their existing products, and that feels awkward. For some products it means they finally work as expected, but will that change things?

  • Screens are everywhere. But right now, there are just too many screens. In a world where people are complaining about screen time, I just don’t know if adding a screen to an oven or a screen to my phone charger makes sense. As with the graphical interface itself, there were times when using a GUI just didn’t make sense even though it was the thing to use. My feeling is few will find adding a stationary small screen for either home control or entertainment compelling, especially if it drags in another ecosystem. For better or worse, people walk around their house with their phones, and even if they don’t, the reaction will always be to grab their phone. Plus, the security and privacy implications are real.

  • Health telemetry is here, economical, and far more interesting. Finally. The thing health telemetry was missing was the ability to measure things seamlessly, continuously, in the background, and non-invasively. Some of us are seeing the benefits with sleep that can come from the right device with the most seamless access. Apple Watch, especially with vitals, is showing how a small number of regularly measured points can be quite useful. CES 2025 showed we’re on the verge of being able to measure blood pressure and glucose in addition to new measures in ECG, respiration, sleep apnea, and more. These are real data points, and when combined with relative change versus precision (“medical grade”) measurement, we can see the way health is substantially making progress. I have been kind of skeptical in the past, but the innovations are at critical mass now.

  • Supply chain is adapting to the world. The supply chain is starting to change in response to geopolitics. For the first time I saw China booths talking openly about factories outside of the mainland. I think this is a really big deal if you make things. There is a long way to go, and the end point is unclear. But starting with small things like phone chargers is interesting in a big way.

  • Time from ODM to product. CES used to be about products you could see in a prototype stage one year that would be available in a year. Now the ODM (the factory) is prototyping and showing proof of concept while at the same time they’ve already started selling early versions to Temu and other nameless OEMs you can see on Amazon or Instagram ads. I had to really struggle to find anything that I had not yet been pushed an ad for on those platforms. Kind of incredible.

  • Home appliances, security, automation have made a ton of progress over the past few years but now seem to have plateaued. The rise of AI might prove more of a distraction when we still have a ton of last mile work.

  • Data and privacy are at a critical mass of invasion. Cameras, locator beacons, every device phoning home, and more are at a critical point in terms of privacy. CES shows the huge interest from governments in face recognition, cameras, and location data. Ironically the self-proclaimed guardians of privacy in the EU give governments themselves essentially a free pass on these issues. I personally worry deeply about a) presumed guilt and the role all this data plays in exposing innocent people who get caught up in the system, and b) the way the court system works and how everything becomes evidence. I won’t have more to say in this report, but I could not help but see almost every device and sensor through the lens of the loss of personal domain and the presumption of guilt.

What do I mean by this last one? Every year there is some booth that just triggers me with respect to privacy. Here was this year’s. Talk about dystopian.

Just a reminder, I’m just one person walking the floor. I don’t see everything no matter how systematic I try to be. In particular, I can’t compete with a web site that dispatches a dozen reporters and videographers. Just me. And 35 or so years of going.

Also, I don’t include links because I don’t want anyone to think I’m endorsing or monetizing this in any way. Do your own research ;-)

Nvidia Keynote

To say the Nvidia keynote was the big event would understate things. This is super weird for me to say since I was there when Bill Gates keynoted the conference and even announced Xbox (along with Tablet PC and Windows 2000). Microsoft was also the largest market cap company at the time (and also under intense antitrust scrutiny) and the darling of Wall Street. While waiting in line to enter the event I even saw long-time CTA CEO Gary Shapiro (the CTA owns the show) and had a quick word about this compared to those old keynotes before he showed me the shortcut to get in. The excitement, energy, and scale of this keynote dwarfed anything I could recall. It was at a scale we saw for some Windows events in China, where getting large crowds was never a problem. Estimates were 10-12,000 people at the keynote held in the Mandalay Bay theater. It was pure insanity to think this was happening because of…graphics cards ;-)

Image: A large stadium filled with people

Jensen was Jensen and brought with him a depth of technical knowledge, industry experience, and founder credibility that you just have to love and experience. He is the leading tech CEO right now in terms of innovation, execution, and excitement across hardware, software, consumer and enterprise. Nothing compares.

There were several important announcements in his presentation which was just him on stage talking and demonstrating. No net.

By far the biggest announcement was NVIDIA Project DIGITS. DIGITS builds on the immediate product announcement of the Blackwell next generation chipset/software for AI and graphics (below). With the GB10 Grace Blackwell Superchip, Project DIGITS delivers a petaflop of AI performance in a power-efficient, compact form factor. It is basically a Mac Mini form factor AI supercomputer. The compute platform consists of the Blackwell chipset along with a CPU provided by MediaTek. Here’s a shot from the web site, which is clearer than what I could capture, showing it off:

NVIDIA Project Digits

The key with this device is that you can train and infer models on one device at some scale. It is impressive. It runs a Linux OS (DGX OS) with Nvidia CUDA baked in, with 128GB of unified, coherent memory and up to 4TB of NVMe storage. It is a beast in a tiny package. The idea is for this to be the dev platform which can then seamlessly scale to Nvidia DGX Cloud instances. By the buzz walking out, this is what everyone wanted NOW. Gamers were wondering if you could run Windows on it ;-)

That’s the interesting part to me of course. It is entirely accessible to Windows/Mac developers via a remoting protocol or as a server. The processor it is running is from MediaTek. The Nvidia GB10 combines an NVIDIA Grace CPU, which includes 20 power-efficient cores built on the Arm architecture, with Blackwell. Jensen showed off a wafer to illustrate how much is jammed into the chip. There’s a massive amount of manufacturing and design prowess required to deliver all this reliably.

Image: A person holding a shield

My sense is this was the first step in Nvidia delving back into the CPU product line in a general-purpose platform (versus autos). I have no insider roadmap information at all but can say we made a bet for the original ARM Surface for Windows on Nvidia precisely because the lead in GPUs mattered so much. We tried to create a new class of device when we announced that at CES 2011 (that’s how Jensen described it then). The industry would be greatly served by Nvidia offering the combination of technologies that could be tapped for general purpose compute. Qualcomm is not even close on the compute that matters for AI, IMO. It might be that the consumer will benefit only indirectly if this platform is just used by developers and data centers, but my strongly held view is that the devices we carry will end up doing much of the work we think of today as hyperscaler or cloud work, so we need this to happen.

The exciting news for Blackwell was really about software, and it shows the enormous lead Nvidia has as both a hardware and software platform. The Blackwell chipset of course brings a massive amount of compute power, but much of the benefit happens through the execution of AI models for graphics rendering. The use of AI dramatically reduces the real-time compute needs and replaces those with generated/inferred frames and pixels. It essentially takes shading and rendering to a new level of software generation, a new level of abstraction.

Image: A person standing next to a large screen

The result is that Blackwell is $1000 cheaper! And there is a laptop version that was announced in partnership with many OEMs.

Image: A person standing on a stage

Nvidia under Jensen’s leadership for three decades has shown an incredible ability to invest for the long term, try new things, and take risks. For those in the software world it is sometimes easy to forget how the hardware world takes 3 years of investing to see new products. It brings a whole new meaning to agile management, and Jensen is the very best at this.

The DIGITS announcement is the biggest and most strategically interesting thing I believe we will see in 2025. It is too easy to look at it as simply a supercomputer. CES can make people do that. But here’s what I think makes it strategically the most interesting of announcements:

  • Whitespace. The product is industry whitespace. The industry is focused on the edge with some custom silicon like Apple’s NPU (very capable), but no one is doing what amounts to a full AI-capable computer on “the desktop” this way. This has huge implications for development, particularly cost of development.

  • APIs. The key to this development is a consistent set of APIs on the desktop and at hyperscale. While many models can run locally today (the open-source ones for example) the unique ability for developers to tap into the platform at these different scale points can solidify the Nvidia software stack. (A sketch of that local-to-cloud flow follows this list.)

  • Models. Underlying this, Nvidia is taking steps to make sure the open-source models work effectively with its APIs so the silicon optimizations shine. Historically, hardware makers have embraced open source hoping the community would tap into their proprietary hardware, and the community in general does not do that. In this case, Nvidia has more than a decade of interest in and support for its evolving platform, which makes this more than a wish; it is a strategic point.

  • CPU. While the CPU and general-purpose nature of this version one “project” were limited, it continues to push ARM at both the edge device and the cloud level. I don’t think this is the last we will hear about Nvidia CPUs.

  • The hyperscalers are focused on cloud consumption. All the startups are feeling the financial aspects of this right now. And as consumers the idea of subscribing will soon show itself to be kind of a crazy waystop as any AI apps we use will have to build in the very same subscription (Apple worked around this with the pre-signin and jump to GPT). At the enterprise level, it seems highly unlikely that an enterprise with 50,000 employees can utilize a variable cost SaaS product to streamline anything when doing so means unbounded/unbudgetable costs. Without writing a whole essay on this, it all feels very unstable much like the early days of the centralized scaled Cable TV companies or early smartphones and the carriers thinking they could charge everyone for everything on the “world wide web” as gatekeepers.
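Here is what that local-to-cloud flow could look like in practice. This is a minimal sketch assuming nothing DIGITS-specific, just stock PyTorch on CUDA; the model, shapes, and hyperparameters are invented for illustration. The point is that the loop a developer iterates on against a local GPU is the same code that later gets pointed at bigger hardware or a cloud instance.

# Minimal sketch: the same training loop runs on a local GPU box or a rented cluster node.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"  # local GPU if present

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Fake batch standing in for real data; shapes are arbitrary for illustration.
x = torch.randn(32, 512, device=device)
y = torch.randint(0, 10, (32,), device=device)

for step in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

print(f"final loss on {device}: {loss.item():.4f}")

The same script, unchanged, is what you would hand to a larger machine; that consistency is the whole appeal of one API surface from the desktop to hyperscale.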

If you’re excited about AI and how it evolves and (like me) also think the current foundation model/chat hype cycle is seeing its limits, then I think what Nvidia showed off is the precursor to exciting things to come from developers who tap into the architecture enabled by these solutions.

This was just so fantastic to see.

AI

Everyone at CES got the memo. AI is the most important thing. This was like CES 1998 when everyone got the memo “Windows and the Internet.” If there hadn’t been a dramatic move away from swag and paper/cardboard, there would have been “AI HERE” table toppers and stickers all over the show. This is important because not only does AI have the potential to deliver a new wave of innovation, but there was also a need to do new things. Much of the industry has been caught in a “what comes next” lull, and with the exception of the ongoing iteration in healthcare (below) and the bright spots in robots and XR we haven’t seen much, and definitely have not seen much with any broad deployment, perhaps except Apple Watch for health and some other health devices seeing moderate take-up in the larger/wealthier markets.

So, there is much optimism and excitement.

However, that optimism and excitement was met with mostly just marketing. Try as I might, I just didn’t find any product concepts/experiments that were really about AI and all that interesting. The way I came to think about this was as follows.

Imagine if available to everyone were the following. Your product could, with almost no work:

  • Convert any spoken language into text

  • Convert any text into speech

  • Convert any handwriting into text

  • Translate between any languages

  • Recognize any object in a scene, including a specific person

  • Describe any scene

  • Answer any question with a well written (or spoken) answer

  • Create a video persona doing any of the above

There might be some more, but you get the point. The models today can do all those things. So, what we were able to see on the show floor were all the existing products one could imagine (and have seen previously) “hooking up” these capabilities.
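Just to make concrete how low the bar to “hooking up” these capabilities has become, here is a minimal sketch using the open-source Hugging Face transformers library and its default public models. The file names are hypothetical, and this is illustrative glue code, not any exhibitor’s product; the point is how little work each capability now takes.

# Minimal sketch of wiring up off-the-shelf model capabilities (Hugging Face transformers).
from transformers import pipeline

asr = pipeline("automatic-speech-recognition")    # spoken language -> text
translator = pipeline("translation_en_to_fr")     # text -> another language
captioner = pipeline("image-to-text")             # describe any scene
writer = pipeline("text-generation")              # answer a question in prose

text = asr("booth_demo.wav")["text"]                          # hypothetical audio clip
print(translator(text)[0]["translation_text"])
print(captioner("booth_photo.jpg")[0]["generated_text"])      # hypothetical photo
print(writer("Q: What does this gadget do?\nA:", max_new_tokens=50)[0]["generated_text"])

Each of those lines would have been a research project a decade ago; now each is a download plus one call, which is exactly why the show floor is full of products stitching them together.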

For example, CES has always had translator apps. These go way back to the origins of CES and attendees needing to talk to each other. Maybe the companies are different today, but the scenario is the same and the products are vastly improved. At the same time, everyone is just using their phone and widely available apps or even native foundation models.

CES has had eInk notetaking devices for ages. eInk is a perennial tech waiting to break out at scale. Kindle finally got traction but is still not a mass scale device. Color eInk is finally here and can be used for readers and other novel products like low power digital signage. Now the readers can capture written ink, and with the models this ink can be recognized and turned into text. Personally, this is incredibly exciting because the original Microsoft Research was founded with a handwriting recognition lab…in 1992 or so ;-) But again, this is just the same scenario, only now it mostly works.

Image: A tablet on a table

Security cameras have been a mainstay at the show since China really started attending in force. The cameras moved from analog to digital, from wired to Wi-Fi, from daytime to night, and so on. Now they all have object recognition and scene description. There are many fancy features such as recognizing different animals or detecting anomalous scenes and being able to alert the operator directly. Still, these are security cameras, but AI enabled. Just need to hope the summary and ID are accurate (see photo).

Image: A screen shot of a computer

Below is an AI assistant that is a generated persona that interacts via speech and voice. It can translate and do all sorts of stuff by stitching together the pieces available to everyone today. The question is really the last mile.

Image: A sign with a person on it

The thing about where we are with AI is that we’re two+ years post the big rise of chat. The main problem chat needed to solve that went beyond what other models solved (translation, recognition) was to be truthful. Stitching all the pieces together is not solving this core technology problem. Of course, this can be and will be addressed but to do so is likely going to revisit much of what made chat great (the creative side) and likely will diminish the role of the foundation models. In other words, what is so important today will be far less important parts of the eventual products. This upends the tech stack and the leaders in the process.

Pushing against this is the relentless cycle of what comes next. From my perspective, we did not come to address truth but have leaped to “AGI is now solvable”. That’s a big leap since presumably AGI requires some level of truth. And many have taken to proclaiming the agentive future. But this skips past both truth and AGI. The vocabulary and expectations are racing far ahead of what is being delivered.

Returning to the above AI assistant. I have 0% doubt it can address basic Q&A say at a theme park or a mall, assuming scoping of the results. The problem is the rush to think this solves all sorts of service problems for the enterprise. As we all know, at least anyone who has written line of business software, the problem is never the known case but the exceptions. The models and frankly AI in general are not equipped to handle exceptions. So, for me the leap to agents and autonomous “do this work for me” seems like a promise that can’t be kept. This makes me uncomfortable. My challenge will be that the demos will be great. But I suspect that the inputs to agents are very quickly going to look like programming more than like asking a trained human for help. Or worse it will be the voice response version of a call agent who is not empowered to make things right but to only follow the flow chart.

It is important not to read this as negative. It is a state of where we are. I get how transitions happen and more than anything I understand what it is like to have negative vibes shouted from the cheap seats. What I am looking forward to more than anything are new products that use AI to do things that actually work better. The web disintermediated a vast array of physical and human barriers to getting to information, products, and services. That’s what is going to happen but what is disintermediated, how, and when is still not obvious. It is obvious to people taking risks and doing experiments. Those aren’t always the booths at CES.

The flip side of all this is the over-the-top AI marketing. The major CE companies all got the memo. They put AI in front of everything!

Note, these next sections will be much briefer because of the overwhelming import of AI and the amount I covered it.

Health/Wearables

After years of skeptical coverage of health and wearables, I am feeling more optimistic than ever. I give Apple a ton of credit for moving slowly but deliberately in the space, as that has created a foundation upon which the ecosystem can work. If you’re into health metrics and are bought into the Apple ecosystem, the Health app and third-party ecosystem really does offer a “whole new world,” even if for US people that includes Epic.

There was a good deal of AI marketing. The biggest example was the Withings “body scanner,” which was really the existing Withings scale (latest body composition model) with a full body scanner mirror offering an AI analysis and a televisit. But really it was a prototype leveraging their existing and quite good scales and other devices. While they offer a service (and subscription) they also integrate with Apple Health, which to me shows the way towards progress while also building on trust and privacy.

Image: A person standing in front of a mirror

Blood Glucose monitors are a perennial showing (going way back) and have just struggled to find a non-invasive measurement. Many reading this have finally been able to tap into continuous glucose monitors (CGM) with Dexcom through Levels or a doctor that are now OTC. These have pros and cons. Finger stick is still the gold standard but for building body telemetry is problematic. Most of the innovation is happening outside the US (Japan and Korea) where the regulatory framework is more permissive. Ortiv is a new kind of fingerstick approach that uses a laser to get the sample. Many continue to work on using various wavelengths of light and laser to measure—these require AI to analyze the reflections on the sensor. There was a near-field IR this year that might be promising but might not fit in a watch.

Image: A device in a box

This is another non-invasive glucose monitor from a company based in Seattle. It uses nearfield IR. I watched it work. So, I am hopeful.
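To illustrate what “AI to analyze the reflections” means in practice, here is a purely hypothetical sketch: a regression mapping reflectance at a handful of wavelengths to a glucose estimate, trained on synthetic numbers I made up. Real devices use proprietary optics, far more channels, and clinical calibration; nothing below reflects any specific product.

# Purely illustrative: synthetic "reflectance at N wavelengths -> glucose" calibration.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 500, 8
X = rng.normal(size=(n_samples, n_wavelengths))                    # fake reflectance readings
true_w = rng.normal(size=n_wavelengths)
y = 100 + 10 * (X @ true_w) + rng.normal(scale=5, size=n_samples)  # fake glucose, mg/dL

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = Ridge(alpha=1.0).fit(X_train, y_train)
print("R^2 on held-out readings:", round(model.score(X_test, y_test), 3))

The hard part is not the model; it is getting optical readings that actually carry a glucose signal and calibrating them per person, which is why this category has been “almost there” for so many years.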

This was a cool product I loved. It is a GPS tracker built into insoles. It is aimed at either the elderly or kids. The benefit here is that it isn’t highly visible and stays out of the way while kids are playing. In general, the AirTag problem is that bad actors know to remove them from people or objects, so we need more ways to disguise those.

Image: A tablet and pair of blue socks

Like glucose, blood pressure has long been without a continuous mobile measurement solution. Many things have been tried but so far, no luck. Novosound is a UK company specializing in ultrasound. They have developed an ultrasound sensor that can be part of a watch/band and measure BP. I spoke with a member of the team building the ultrasound part (versus the full device) who seemed particularly optimistic that it can be deployed in a consumer-friendly approach. Having continuous blood pressure would be a huge breakthrough for both the healthy and others. There’s very little data on how BP varies over a normal day, and having data for an individual relative to day after day would be HUGE. Below is the ultrasound probe that could go in a band (in her hand is a stand-alone prototype with battery).

Image: A sign with text and images on it

If you have not gotten a walking pad for home, then you’re missing out. If you had a treadmill and couldn’t move it out of your basement or up the stairs then get a walking pad! These are amazing for those of us in a rainy city. They are incredibly cheap and easily carried. Amazon is filled with them as is my Instagram feed.

Image: A group of colorful objects on display

While much more about AI than health, one area that many are super bullish on is AI reading any sort of medical images. Living with a person who reads images for a living, I am less bullish on this area. The training data is skewed towards problems, and even though there are many stories of AI catching what would have been false negatives, the false positives have a real cost. This is a longer topic, and I don’t want anyone to jump to conclusions either way. That said, for mass scale screening there’s a huge potential since the alternative is no imaging at all. Scanning for glaucoma, diabetes, or MD in eyes is a huge upside. There were some early attempts connecting imaging sensors to mobile phones, but a Korean company has done that and combined the images with an AI model. I did the scan and in 30 seconds got no false positives. Super cool. To those that know, there was not even an air puff for glaucoma.

Image: A computer screen showing a screenshot of a medical image

Glasses/Headphones/Wearables

There has been a ton of excitement this year from the work by Meta on its glasses, after the surprisingly excellent Apple Vision Pro that is still searching for a scenario and practical embodiment. There’s a decided shift on the floor, though, from AR/VR to XR and making glasses as much like glasses as possible. I am not an expert in the latest in the field, so it is super difficult for me to sift through what seem like a very large number of OEM glasses of pretty low quality and a lot of very specific components in search of buyers. That said, every booth featuring glasses was crowded, and in a post-covid world everyone was happy to try them on. The Luxottica/Ray-Ban booth with Meta glasses was very heavily trafficked.

This is a set of glasses specifically designed to just watch immersive Google TV:

Image: A person wearing a mask and holding a remote control

There is a glut of new products offering what are basically wireless in-ear headphones with a charge case and app but touted as hearing aids. While my hearing is declining, I don’t yet use assistance, so my first party experience is limited. It is clear that in the US, with recent FDA changes and particularly Apple’s latest AirPods Pro and iOS releases, we are collectively on the brink of major reductions in cost and broad increases in access to hearing assistance. There were dozens of booths, of varying regulatory oversight, showing off headphones. They generally looked like AirPods in form factor. Often, they came with apps connected to transcription or video captioning tools.

There were quite a few wearable rings. It is tough to gauge the quality of the sensors, and the reliance on a siloed app can be tricky. Some do integrate with Apple Health, but a Chinese-origin app introduces risk of data leakage.

Image: A group of rings on display

Smart home / Home Automation / Pets

Home automation this year was a bit of a letdown. This is a category with multiple “gatekeepers,” including the legacy infrastructure of homes (switches, garages, doors), connectivity (zigbee, wifi, etc.), and the mobile platforms (which control the connectivity and the gathering of devices into a “home” for permissions and access). I love this stuff, and our house is a lab. That said, this year didn’t show a lot of progress. There were fewer companies showing off the standard suite of “alarm” products (doorbell, door sensor, flood sensor, etc.). I think that is good.

Door locks made a lot of progress over the past year, with some Apple Home Key locks finally making it to market (I bought a Schlage essentially off eBay because you couldn’t get one at Home Depot). But Home Key is slow to deploy fully (no sharing). In some ways the locks are going backwards. The thing this year was to add face recognition to the lock. This is driven by Android, primarily in China (Philips is just the distributor below). I could not get over the scale of these locks. Somehow, they turned an iPhone Face ID notch into an 18” ginormous lock. Pass.

Image: A white box with black buttons and a black and white box with black text

Amazon and Kidde announced a CO + Smoke detector which is welcome for the Ring family. Unfortunately for something that most people already find ugly and want to hide, this product looks literally like a CO sensor glued on top of a smoke alarm. So, it kind of went the wrong direction.

Image: A smoke detector on a wall

Animals play a huge role in the Home Automation side of CES. The market for people buying stuff for pets is enormous. And it is full of love. I counted over a dozen automated cat feeders and self-cleaning litter boxes. I have a ton of friends for whom these are life savers. Our boys will not go anywhere near the enclosed litter boxes, and they refuse to eat anything that they do not see us put in the bowl (“must be fresh” they meow). They are not spoiled at all.

Birdfy is a wonderfully fun outdoor bird feeder + camera. I had one for a brief time and loved the photos, but the California squirrels attacked it en masse and tore it down from the post. They introduced several new models for more sedate areas. I love the photos. The hummingbird feeder is going on our terrace in Seattle where in the spring we get visitors many floors up in the urban setting.

Image: A bird feeder in a building

The folks at Pawport spent a lot of energy designing a dog/cat door that works super well. It is an industrial strength door controlled either via a collar worn by the pet (proximity opens / closes it on a schedule) or by an app manually. The door itself is literally bullet proof and you can even install one on each side of the door for security/climate needs. It is a great execution.

Image: A display case with a screen showing a room

Back to the kittens. We’re completely obsessed with the home gardens. These are multiplying. CES has many industrial scale gardens from Japan on display and there are an increasing number of home gardens, especially for urban customers. These aren’t just for pot anymore. They all have proprietary pods with fertilizer. We grow all sorts of stuff including kitten grass. This one is called the Plantaform Smart Indoor Garden and is powered by NASA technology! Below that is a much larger popup greenhouse sized version.

Image: A white box with plants inside

Auto / Transport / Drones

As mentioned, I am glad the “let’s just turn CES into the west coast auto show” trend has been toned down. There seemed to be little new in general across the whole of auto and transport. The booth that seemed to get all the attention was a crazy Cybertruck-looking EV with six wheels that could also carry a dual-seat autonomous passenger drone, also an EV. None of it seemed remotely feasible in reality and was not unlike Little Nellie in some ways. IYKYK.

Image: A group of people standing in front of a white vehicle

Here’s another one of these drone models that seems unlikely.

Image: A helicopter in a building

There’s no shortage of drones in the market, but the market has shifted to professional uses. This HoverAir X1 has an 8K camera and vision recognition so it can track you while doing adventure sports or something. It is a pretty cool form factor, and the camera quality was high. You can also pilot it manually with a phone and cradle.

Image: A hand holding a drone

Honda unveiled a prototype EV line, the Honda 0 Series. There was a passenger car and a medium SUV. They seemed greatly influenced by Tesla to me. There was also a Honda/Sony prototype that featured a PS5 built into the car.

Image: A collage of a white car

Past years saw an explosion in scooters and other micromobility solutions. The incremental innovation happening in this space is really in pedal-assisted bikes. In this case the innovation is, on the whole, negative in my view. The bikes are getting enormous, heavy, fast, and unwieldy. Living in a hilly city I routinely see these bikes flying at car speed down hills outside the bike lanes and essentially out of control. They have a stopping distance equivalent to a car, but the riders treat them like bicycles. They exist by threading the needle of class II and class III vehicle definitions. Even the most legitimate vendors like Segway, who follow the rules, are saying that particular models are exclusively for off-road use. In an urban setting where traffic laws and open drug use laws are not enforced, nor is proper use of bike lanes, it is no surprise riders will count on the non-enforcement of e-bike classification as well. These are most decidedly a negative.

Image: A motorcycle on display in a building

On the other hand, far preferable to over-powered, impossible-to-stop motorcycles with pedals, I would love to see more potential for urban use of these neighborhood-class EVs. I don’t hold out hope in the US, but boy these are great.

Image: A red and black car

TVs

TVs are always fun, and it is difficult to resist shopping. AI made its way into TV with lots of claimed use of AI for audio and video processing that I am pretty sure everyone reading this disables.

Google TV was there in force driving its offering, as was Roku. The primary focus was on the software side. Most of the major TVs have the Google TV app, but TCL and Sony build in the software as the OS for the TV, while LG, Samsung, and others have their own OS. I’ve tried them and found them all frustrating. There’s no reason to have a box requiring another plug (and potential internet connection), and I wish this would just work out. Obviously, I get the complexities. At the other extreme, China maker Changhong simply offers TVs with all the different operating systems:

Image: A display of a television

The primary “news” in TVs is just how big they are getting and how economical those huge ones can be. TCL is the low-priced big-screen leader with 100”+ mini-LED. These are beasts of course, and you’re not really wall mounting them or even putting them on typical floors. But they also sell for $2000, which is kind of insane since 3 years ago that size was a $100,000 custom order.

LG was showing off a transparent TV with a screen that rises up to make it a regular TV. It is super cool, but I don’t quite get what it does. It is kind of a neat room divider. It costs like $20,000 US but is available.

Image: A screen with icons on it

For big screens, the laser projector seems on the edge of uptake, from pocket-sized models to ultra-short-throw units that sit 12” from the wall and project 160”. There was previously a Sony where the screen rose up from some sort of furniture; that model isn’t around anymore, but HiSense has picked up the form factor. The inconvenience of a screen remains, but the throw distance of 12” is kind of wild.

OLED is pushing down to the next tier of OEMs, and the battle is keeping the mini-LED price point up for another cycle while micro-LED ramps. But micro-LED is super nice and goes to 163”. Behind the scenes is an IP battle between East Asian countries. I suspect we will start to see OLED in computer monitors as well.

There seems to be a big push for “art display” TVs, that is, TVs with an idle screen showing photos or famous works of art. This might be an Asian market interest since in the US we tend to prefer “HDMI 1 No Signal” as the screen saver. A disproportionate amount of floor space seemed to go to these art TVs. One maker was pushing the subscription that was offered with the TV as well. Below is a whole exhibit focused on Art TV.

Image: A wall with pictures on it

At the extreme low end there was a super tiny “tri-fold” projector that looks exactly like a tri-fold charger but is a 60” HD projector. I’ve always loved these but never know where to use one. They are amazing to see.

Image: A device on a table

PC/ Gaming / Laptops

At the start of the show Dell announced a renaming/rebranding of the laptop line though they did not have a public show presence. Lenovo had their off-floor demo stations showing a laptop with a second screen that expands from the main screen. Sony was showing an accessory screen for their laptops. In general, these 11-15” OLED panels you can see (depending on feed) on Instagram ads endlessly are all over the place. They aren’t new, but the price has dropped to incredibly low levels, perhaps $100 on Amazon already.

Image: A group of laptops on a table

I just don’t see more innovation in laptops happening any time soon. The industry is stuck waiting on Intel and the move to ARM doesn’t make any sense for the work involved. We’ll see what happens. I am sure we will see some new models that come with discrete Nvidia chips, but I am far more bullish on DIGITS filling the white space for innovation in laptops given it can break from the Intel dependency and deliver a unique and unmet need.

To that end there is a lot going on with “mini-PCs,” which are Linux/Windows devices in the Mac Mini form factor. Many tech enthusiasts have played with these. They are (also) readily available on Amazon for pretty low prices. Right now, they have terrible thermals, and all use big power bricks, but perhaps that will change. Below are Mini PCs from MSI (the mobo maker) that have Intel processors. These have dual multigigabit net adapters, which must meet a need I do not understand.

Image: A group of black electronic devices on a table

There are also mini-PCs running new ARM chips from Qualcomm, which means they come with a promise of power efficiency. These can run Windows 11, or they can run Linux. They run the Snapdragon Elite 1000 with 45 TOPS (by comparison Nvidia says DIGITS has like 4000 TOPS). Geekom showed an ARM mini as well as an AMD mini:

Image: A computer device on a table

AMD, which has been on a roll, had a great partner booth with a ton of gaming machines and AI compute machines. These were kind of insane builds at the high end. At the low end there were some mini towers. These are highly specified but are still pretty hot and had a good hot wind blowing out the back.

Image: A black rectangular object next to a computer screen

My favorite PC booth for the past few years has been Razer. They’ve done a great job building around a community and focusing on tech elite gamers. Each year they have a “project,” which is an early prototype of a new product. This year’s project (Arielle) was a gamer chair that had a bladeless, Dyson-style fan in the seat and back. It was pretty…cool. (sorry)

Image: A computer chair and a monitor

The big announcement from Razer was a proprietary extension to the Moonlight/Sunshine remote game streaming protocol/apps to enable remote gaming. Lots of wow factor if you want to play games on your home rig from work (assuming you can get the drivers on your PC). The work they did was adaptive video sizing, though this means your home PC is offline to local console use. Here’s playing a game on a phone.

Image: A hand holding a video game controller

On a similar note, Intel was showing how to share a trackpad/keyboard and operate across two PCs. I tried to make a video, but I got so confused as to what I was doing. Still, it is a wild PC trick, though I would not recommend ever downloading the drivers from just anywhere and installing them unless you want malware mainlining your PCI bus over Thunderbolt. It is called Thunderbolt Share and described as:

Thunderbolt™ Share unlocks ultra-fast PC-to-PC connectivity experiences that fundamentally change the way creators, gamers, consumers, and business users interact with two PCs. Its intuitive easy-to-use interface allows for more productive workflows, optimizing space and enhancing overall performance, all with two PCs equipped with industry-leading Thunderbolt technology.

The software enables users to easily and securely share screens, keyboard, mouse, and storage at the speed of Thunderbolt technology, offering exceptionally responsive screen sharing, ultra fast file transfers with simple drag-and-drop functionality, folder synchronization, and easy file migration from an old PC to a new PC.

Image: A couple of laptops on a table

A fun dock with apps and a clock and more:

Image: A digital clock on a table

Smartphones and Accessories

There was very little by way of smartphones. There continue to be a number of supply chain based ODMs making phones and tablets and many of these become the point of sale or delivery person devices we see in everyday use.

The accessory of the show—meaning the thing you saw everywhere and see on Instagram ads—is the tri-fold charger. It is just a travel charger that does Qi/MagSafe charging for phone, Apple Watch, and AirPods. One of the ODMs told me it is the best-selling charger and he’s getting requests from all his customers. Some of these are stationary and made of metal, and others fold up and are fabric. I don’t like these as much as the 2-in-1s that have a ring, since I can easily charge and watch a movie on a flight, and in the hotel I don’t need to charge all three devices at once. It saves on weight. For the ultimate setup I combine this with a 65W adapter and a dual-output cable and can charge a MacBook as well.

Image: A group of electronic devices on display

I like these chargers with retractable USB-C.

Image: A group of electrical devices on a shelf

The three companies doing great accessories are J5, Rolling Square, and Anker. They push the ODMs and add value in their designs. J5 is doing some great laptop docking stations. Rolling Square has interesting FindMy accessories and the best MagSafe phone-to-laptop mount for video. Anker has the best stand-alone chargers. I will just caveat that last point by saying that Anker has gone crazy adding LED screens to chargers, and I don’t think we need that. One desktop multi-device charger has a whole “Tools Options” screen and a knob/button to pick how many watts go to which output. 🤔

Here’s a J5 dock that was super nice with a magsafe on top:

Image: A square device with a round button on a clear surface

Here’s an Anker with a screen and also a super nice cable from Anker.

Image: A close up of a cable

If you need a travel adapter (I have not needed one in 10 years, but just in case), this one has done well for me and now has 65-watt PD, and I keep it in my go bag.

Image: A red box with a white object on it

This is a Shenzhen ODM that is doing great work. I had a great conversation with the engineer who noticed I was using his battery case. He was telling me about their work pushing FindMy. Here are three prototypes they are selling to OEMs these days, which I bet you can soon see on Instagram ads and Amazon. Here’s a passport wallet with FindMy, a TSA lock with FindMy, and a super cool FindMy built into a mini-flashlight keychain. We talked about the need to disguise FindMy devices and how using Apple AirTags for luggage is problematic.

Image: A collage of electronic devices

He also told me that the US market hates this kind of charger, and the lack of success surprised him.

Image: A white square device with a cord attached to it

This looks like the display at the Apple Store

Image: A poster of a group of usb ports

Robots

There were a lot of robots. The big new thing this year is more humanoid form factors. Many of the ones on the floor were non-operational, and several times I saw the developers anxiously trying to make them work. The robots fell into these classes:

  • Arms. There were many arms for moving things around from a stationary point. The primary scenario is assembly, and the feature they all touted was the cameras and the ability to quickly learn the steps and correct for misaligned parts and so on. These require a lot of work to put to use, as one would expect.

  • House cleaning. Still seeing a lot of Roomba clones. The big news was one that combined the mobile vacuum with an articulating arm on board to move objects out of the way. Seems like a pet fight waiting to happen.

  • Warehouse assist. Particularly in Asia there were quite a few load carrying robots, carts with wheels, to move things around a warehouse. These were all autonomous and had cameras for obstacle avoidance and so on. These are all fantastic.

  • Human assist. In the health area there were a few human assist robots. Two were full mobility assist for those in wheelchairs. Then there were several for legs and arms essentially for therapy or temporary use. A photo of one is below.

  • Humanoid. These are like the Tesla robot. I’m pretty bullish on these and expect Tesla to do great because they are focused on building a platform and also, in parallel, using them in the factory. I believe that feedback loop is critical (much like SpaceX and Starlink, or Windows and Office). I think the ones on the floor were mostly for show and not on a product roadmap.

Image: A mannequin wearing a pair of knee braces

Gadgets

Every year I lose track of all the gadgets I see. I took over 300 photos this year. Crazy!

By far my favorite was also the biggest. It isn’t a gadget but a serious emergency response “kit.” It is MBESS – mobile battery energy storage system – and is built in the US in Baltimore. It is a trailer containing 90kWh of batteries that is certified for emergency response. The power can be used in place of a generator, which for fire areas is really important (diesel is risky). Here’s the cool part. To be used in an emergency setting it needs to stay in communication even when the main battery is depleted. Generators carry backup batteries for this. This device has a battery backup too, but it also has a solar panel to power comms and telemetry even when the main battery goes down. It can be charged by power mains or by an EV charger. Conversely it can charge an EV. The capacity is about one Tesla’s worth. People get very confused by the panels, which would take about 2 weeks to charge the battery. That’s also why that BS marketing phrase “solar generator” is so dumb. It can also be used for any remote power needs (they told me “weddings” and then I had questions). Super cool and MADE IN THE USA.

Image: A person standing next to a trailer
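A quick back-of-the-envelope shows why the panel is for comms and telemetry rather than recharging. Only the 90kWh pack size comes from the booth; the panel size and sun-hours below are my assumptions, picked just to show the order of magnitude.

# Rough arithmetic; the pack size is from the booth, the rest are assumed values.
pack_kwh = 90            # stated capacity of the trailer's battery
panel_kw = 1.3           # assumed size of the onboard solar panel
sun_hours_per_day = 5    # assumed peak-sun hours per day

days_to_recharge = pack_kwh / (panel_kw * sun_hours_per_day)
print(f"~{days_to_recharge:.0f} days to refill from solar alone")  # roughly two weeks

Whatever the exact numbers, refilling a 90kWh pack from a trailer-top panel is a weeks-long proposition, which is exactly why calling these things “solar generators” is marketing, not math.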

Toothbrushes seemed very big this year.

Image: A display of electric toothbrushes

These Dyson clones were ever-present in the Shenzhen section.

Image: A vacuum cleaner next to a wall

Have you seen the ads on IG/Amazon for “powerful air blower” to use in your car or for your keyboard? These are bladeless USB-C powered / battery operated fans, and the airflow goes either way. There were dozens of booths with them. This one took it to a whole next level, like a Ryobi line, and added lights, blowers, heaters, brushes, and more to the same basic engine.

Image: A group of medical equipment on a table

There was a big show presence for IoT, which was heavily industrial. I wanted to end by highlighting this product from Blues, a Boston-based company making an entire IoT platform. The thing is, this isn’t just any startup trying to platformize IoT (there have been many); this one was founded by Ray Ozzie (Notes, Groove, Talko, Microsoft, legend). This is really an IoT platform done right. They have developed a secure platform where the device talks to their service, and the hardware side is a modular kit that provides varying levels of connectivity and sensors. The thing that scares me about IoT is the same thing that scares me about home automation devices: I can’t stand thinking about all the code bases sending data, all the clouds, and all the exposed surface area. This is the right solution to that, and the industry should have a platform winner here. Way to go Zak (who I talked to, and who used to work on Windows) and Ray!

Image: A cell phone on a table
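To show the pattern rather than the product (this is not Blues’ actual API; every name, endpoint, and field below is hypothetical), the appeal is that a device speaks to exactly one service you trust instead of each gadget rolling its own cloud:

# Hypothetical sketch of the "one device, one trusted service" pattern.
import json
import urllib.request

SERVICE_URL = "https://iot.example.com/readings"   # the single, known endpoint
DEVICE_TOKEN = "replace-with-provisioned-token"    # issued when the device is provisioned

def send_reading(sensor: str, value: float) -> None:
    payload = json.dumps({"sensor": sensor, "value": value}).encode()
    req = urllib.request.Request(
        SERVICE_URL,
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {DEVICE_TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:       # the device only ever talks to this host
        print("service acknowledged:", resp.status)

send_reading("temperature_c", 21.5)

One endpoint, one credential, one code path to audit; that is the surface-area argument in about fifteen lines.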

If you made it this far, please feel free to share this. I’d also love any questions, comments, or pointers on what I missed (or maybe just missed writing about). Thank you for your support.


Oh, and a final fun note, the Bill and Melinda Gates Foundation had a booth, and it was running a Surface Table.

Image: A display of a large screen
]]>
<![CDATA[Remembering Mike Maples, Sr. ]]>https://hardcoresoftware.learningbyshipping.com/p/remembering-mike-maples-srhttps://hardcoresoftware.learningbyshipping.com/p/remembering-mike-maples-srFri, 10 Jan 2025 17:36:04 GMTThe computer industry lost a legendary executive this week with the passing of Mike Maples, Sr. He was a mentor, leader, and role model to a generation of Microsoft employees, including me. By any account, Mike created the product culture that was critical to scaling and executing what became the modern Microsoft admired by so many.

Image

Mike joined Microsoft in 1988. He was an unexpected hire to many, as he came from IBM. He was hired at the most senior level in the company, back when the company had few executives and had not done particularly well at outside hires. Microsoft had a strong business partnership with IBM but a challenging joint development program, during which the two companies were building the OS/2 operating system. As a joint project it was culturally difficult, to say the least. He was hired to run the Applications Division, Apps, which at the time was primarily seeing success on Macintosh, representing almost half of Microsoft revenue and profit. Microsoft’s first-party applications were a distant number 3 or worse. At the time, Windows was mostly an idea that took forever to ship. Still, the Apps strategy was confused between DOS, Windows, OS/2, and Macintosh. The team was not particularly organized for success as a combination of product units and job functions.

Through a series of organizational changes as well as some key acquisitions like Forethought/PowerPoint, Mike led the transformation of Apps into an execution machine. What was incredible about how Mike led was that these results were achieved by incredibly difficult work—pioneering work—by many people doing amazing things with little chaos or randomness as we used to say. All along, Mike created a culture of respect, collaboration, and most of all “promise and deliver”. He instilled in everyone the most basic notion of saying what you’re going to do and doing that well. It was during these early and formative days of the Apps business that Mike coached the team to learn and improve from mistakes, such as the famous “Zero Defects” memo on quality. With a history of all of Microsoft’s products spinning out of control and shipping years later than promised, internally and externally, Excel 3.0 shipped a mere 11 days later than its originally planned date, an unheard-of accomplishment. It wasn’t simply Mike’s direct efforts. There were amazing people assembled on the product team and leading the Excel product unit, Mike’s creation. It was Mike's cultural leadership, his touchstone. Mike challenged the team and coached them. The team delivered. And Apps as more than an org, but a culture, was born. Mike was Apps.

If you visited the Microsoft campus you would have noticed in the Apps courtyard special bronzed tiles with the names of products and the dates they shipped. This was part of a culture Mike created, the Ship It awards. Everyone received a weighty acrylic block commemorating the accomplishment of shipping in an era when that was nearly impossible. After each product shipped, a tile was added to the courtyard and we each got a mini tile to affix to our personal Ship It Award. Mike instilled the idea of shipping across the company.

Some very smart people in marketing had an idea for delivering both of the individually successful apps, Word and Excel, to customers that seemed to be buying only one of them. An earlier idea for DOS apps did not do well in market, but the seed was planted. The early results from Macintosh Office were so successful it was clear there would be a pivot to a suite of products and not just one. Mike wrote a key memo outlining both the customer strategy and the competitive advantage that came from a focus on the newly birthed Office product. It became something of a stump speech he delivered. In the technical and technology-driven Microsoft culture, Mike was among the first to bring customer and market strategy to the forefront.

At the same time, Windows 3.0 was close to completion and was looking like it would be an excellent product. Windows Office 1.0 shipped just months after Windows 3.0. What Mike did was lead us to strategic clarity. I was just a developer in what would become C++ trying to figure out who we were building tools for—Windows, Mac, OS/2, DOS, NT—we just had no idea. I vividly remember the day Mike brought us all clarity. Windows was our focus, though Macintosh remained “sim ship” for quite some time. While Bill Gates had written clearly that “Windows was a platform for the 90s” he also thought doing cross-platform wasn’t difficult :-) Mike was an incredibly important force in balancing all we needed to do at the time. Again, promise and deliver.

On my C++ team, which reported to Mike, we were struggling so much with focusing on Windows that we had to do a “review” meeting with him. Now, Microsoft review meetings at the time were of two totally different types. There were the Systems meetings (Windows, DOS, OS/2, and then NT), which were like the BillG reviews that had become infamous. Apps meetings were much more structured and much less “in the weeds”. We prepared a deck that probably had 50 slides in it, which we could never cover but which would have been typical at the time. We went through a bunch of slides. We were running out of time and jumped to the end, to a slide that raised the question “Should we do a Windows-hosted toolset for building Windows software or stick to the command line?” We were torn and debating; we knew Windows was the strategy, but then so, we thought, was everything else.

Mike looked at us, rings of people encircling the big table. He asked, “How long have y'all been thinking about this problem?” Of course it had been months, day and night, but mostly we just froze. He then said in his best Oklahoma accent “There’s a lot here…much more than I can absorb in an hour. How long have y’all been working on these foils and this problem?” More silence.

Mike then followed up, “Y’all been working on this longer than me, and know more than I will ever know. Why don’t you just tell me what you decided to do and then we can move the project forward?” That wasn't how Microsoft worked; the more senior a person was, the more certain they were, and they did not hesitate to tell you that. He was teaching us that that approach doesn't really scale.

We continued to debate in front of Mike, the two sides of the C++ team. The big issue was that a Windows product might take longer and appeal to fewer customers, which perhaps meant losing competitively. Mike looked at us and told us to tell him when we could ship and why that would be a competitive win, and then to just ship when we said we would. That’s what we did. Probably the most important meeting of my career.

With Office doing well, Mike took the next step which was to build an organization that represented the product that needed to be built and that customers were buying. Mike had created the organization that got us to this point. In what you don’t see all that often, he then put the right people in place to create a new kind of organization, one structured to build and deliver Office. He did so by undoing some of the things he did. He did so with such grace and clarity that it became a model that I and others would follow through many more generations of Office and then a few generations of Windows.

That change was in a sense a passing of the torch to those that Mike had mentored and managed through his original Apps organization to this new organization (this is also when I joined Office). Mike became one of the three most senior executives in the company with Bill and Steve, leading the entirety of Microsoft product development. In this immense role he was the executive leading Windows and Office and everything from tools to games to enterprise servers. Everyone came to benefit from Mike’s leadership and cultural transformation.

Mike retired to his beloved ranch in the summer of 1995. Of course he didn’t quite retire. He served as an advisor, investor, board director, educator, and more to companies of all kinds. I had an incredible opportunity to teach a class at Stanford with him in 2016 when he made a trip out. I was overcome with emotion when the first slide went up: my name next to Mike’s.

I wanted to remember Mike’s professional life, but his life outside of work was everything to him and a source of support and inspiration. Mike’s wife, Carolyn, was ever-present at Microsoft events, always remembering each of us; she survives him in Texas. Mike’s son, Mike Jr., is himself a legendary investor and founder of Floodgate in Palo Alto, and he is also an author of the recently published Pattern Breakers: Why Some Start-Ups Change the Future.

I owe everything in my professional life to Mike, most literally everything.

Many have so much to be thankful for in the contributions Mike made to the computer industry.

May Mike’s memory be a blessing.

Tren Griffin, long-time Microsoft employee, wrote an essay “A Dozen Things I’ve Learned From Mike Maples Sr. About Business and Investing” which also references a wonderfully detailed interview with Bob Gaskins, founder of Forethought/PowerPoint, should you wish to read more about the impact Mike had during his time at Microsoft and after.

The WSJ published an obituary on February 5 (online and in print on February 8). It included this wonderful photo of Steve Ballmer, Mike Maples, Frank Gaudette, and Bill Gates as well as this candid from Esther Dyson’s PC Forum in 1989 (Jon Lazarus and Pam Edstrom on the left and right).

Credit: Microsoft
Credit: Esther Dyson https://www.flickr.com/photos/pcforum/with/5711796519

]]>
<![CDATA[225. Systems Ideas that Sound Good But Almost Never Work—"Let's just…"]]>https://hardcoresoftware.learningbyshipping.com/p/225-systems-ideas-that-sound-goodhttps://hardcoresoftware.learningbyshipping.com/p/225-systems-ideas-that-sound-goodSat, 28 Dec 2024 23:00:58 GMT

@Martin_Casado tweeted some wisdom (as he often does) in this:

Image

He asked what else and I replied with a quick list. Below is “why” these don’t work. I offer this recognizing engineering is also a social science and what works/does not work is context dependent. One life lesson is that every time you say to an engineer (or post on X) that something won’t work it quickly becomes a challenge to prove otherwise. That’s why most of engineering management (and software architecture) is a combination of “rules of thumb” and lessons learned the hard way.

I started my list with “let’s just” because 9 out of 10 times when someone says “let’s just” what follows is going to be ultimately way more complicated than anyone in the room thought it would be. I’m going to say “9 out of 10 times” a lot below on purpose because…experience. I offer an example or two below, but for each there are probably a half dozen I lived through.

So why do these below “almost never work”?


Image

Let's just make it pluggable. When you are pretty sure one implementation won’t work, you think, “I know, we’ll let developers or maybe others down the road use the same architecture and just slot in a new implementation.” Then everyone calling the APIs magically gets some improvement or new capability without changing anything. There’s an old saying: “the API is the behavior, not the header file/documentation.” Almost nothing is pluggable to the degree that it “just works”. The most pluggable components of modern software are probably device drivers, which enabled the modern computer but worked so poorly that they are either no longer allowed or modern OSs have been building their own for a decade. The only way something is truly pluggable is if a second implementation is designed at the exact same time as the primary implementation. Then at least you have a proof it can work…one time.
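To make “the API is the behavior” concrete, here is a minimal sketch in Python (hypothetical names, not from any real product) of a “pluggable” store where the second implementation satisfies the interface but not the behavior callers came to depend on:

```python
# Hypothetical sketch: a "pluggable" key-value store. The second implementation
# satisfies the written interface but not the behavior callers depend on.
from abc import ABC, abstractmethod

class KeyValueStore(ABC):
    @abstractmethod
    def put(self, key: str, value: str) -> None: ...
    @abstractmethod
    def get(self, key: str) -> str: ...

class InMemoryStore(KeyValueStore):
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value             # writes are visible immediately
    def get(self, key):
        return self._data[key]              # missing keys raise KeyError

class EventuallyConsistentStore(KeyValueStore):
    """Same 'header file', different behavior: writes buffer until flush()."""
    def __init__(self):
        self._data, self._pending = {}, []
    def put(self, key, value):
        self._pending.append((key, value))  # not visible until flush()
    def get(self, key):
        return self._data.get(key, "")      # missing keys return "" instead
    def flush(self):
        self._data.update(self._pending)
        self._pending.clear()

store: KeyValueStore = InMemoryStore()
store.put("a", "1")
assert store.get("a") == "1"          # callers quietly learn to rely on this

store = EventuallyConsistentStore()   # "just plug in" the new implementation
store.put("a", "1")
assert store.get("a") == ""           # same interface, different behavior
```

Nothing in the type signatures changed, yet every caller written against the first implementation is now subtly broken.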

Let's just add an API. Countless products/companies have sprinted to some level of success and then decided “we need to be a platform and have developers” and soon enough there is an API. The problem with offering an API is multidimensional. First, being an API provider is itself a whole mindset and skill where you constantly trade compatibility and interoperability for features, where you are constrained in how things change because of legacy behavior or performance characteristics, and you can basically never move the cheese around. More importantly, offering an API doesn’t mean anyone wants to use it. Almost every new API comes up because the co/product wants features, but it doesn’t want to prioritize them enough (too small a market, too vertical, too domain specific, etc.) and the theory is the API will be “evangelized” to some partner in the space. Turns out those people are not sitting around waiting to fill in holes in your product. They have a business and customers too who don’t want to buy into yet another product to solve their problem. Having an API—being a platform—is a serious business with real demands. There is magic in building a platform but rarely does one come about by simply “offering some APIs” and even if it does, the chances it provides an economic base for third parties are slim. Tough stuff! Hence the reward.

Let's abstract that one more time. One of the wisest computer scientists I ever got to work with was the legend Butler Lampson (Xerox, MIT, Microsoft, etc) who once said, “All problems in computer science can be solved by another level of indirection” (the “fundamental theorem of software engineering” as it is known). There is truth to this—real truth. Two things on why this fails. First, often engineers know this ahead of time, so they put abstractions into the architecture too soon. Windows NT is riddled with excess abstractions that were never really used, primarily because they were there from the start before there was a real plan to use them. I would contrast this with Mac OS evolution, where abstractions that seemed odd appeared useful two releases later because there was a plan. Second, abstractions added after the fact can become very messy to maintain, difficult to secure, and challenging to optimize for performance. Because of that you end up with too much code that does not use the new abstraction. Then you have a maintenance headache.
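A tiny illustration of that first failure mode, premature indirection, again as a hypothetical Python sketch rather than any real system:

```python
# Hypothetical sketch: indirection added "for the future" with exactly one
# real implementation behind it.
class RendererRegistry:
    """Extra layer so that, someday, something else might be registered."""
    def __init__(self):
        self._factories = {}
    def register(self, name, factory):
        self._factories[name] = factory
    def create(self, name):
        return self._factories[name]()

class ScreenRenderer:
    def draw(self, text):
        print(text)

registry = RendererRegistry()
registry.register("screen", ScreenRenderer)

# Every call site now pays the indirection tax for the one implementation...
renderer = registry.create("screen")
renderer.draw("hello")

# ...and when a second renderer finally shows up years later, it still has to
# be threaded through security, error handling, and perf assumptions this
# layer never anticipated, so plenty of code keeps constructing
# ScreenRenderer() directly anyway.
```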

Let's make that asynchronous. Most of the first 25 years of computer science was figuring out how to make things work asynchronously. If you were a graduate student in the 1980s you spent whole courses talking about dining philosophers or producers-consumers or sleeping barbers. Today’s world has mostly abstracted this problem away for most engineers who just operate by the rules at the data level. But at the user experience level there remains a desire to try to get more stuff done and never have people wait. Web frameworks have done great work to abstract this. But 9 out of 10 times once you go outside a framework or the data layer and think you can manage asynchrony yourself, you’ll do great except for the bug that will show up a year from now that you will never be able to reproduce. Hopefully it won’t be a data corruption issue, but I warned you.
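If you want the canonical flavor of that irreproducible bug, here is a deliberately tiny sketch using Python threads, with an artificial sleep added purely to widen the race window:

```python
# Hypothetical sketch: hand-rolled asynchrony with a check-then-act race.
# The sleep is artificial, added only to make the rare interleaving reliable.
import threading
import time

balance = 100

def withdraw(amount):
    global balance
    if balance >= amount:      # check...
        time.sleep(0.001)      # ...the world can change right here...
        balance -= amount      # ...then act on a stale decision

threads = [threading.Thread(target=withdraw, args=(80,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance)  # with the sleep this is usually -60; without it, it is almost
                # always 20, which is exactly the bug you cannot reproduce later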

Let's just add access controls later. When we weren’t talking about philosophers using chopsticks in grad school we were debating where exactly in a system access control should be. Today’s world is vastly more complex than the days of theoretical debates about access control because systems are under constant attack. Of course everyone knows systems need to be secure from the get-go, yet the pace to get to market means almost no system has fully thought through the access control/security model from the start. There’s almost no way to get the design of access controls in a product right unless you are thinking of that from the customer and adversary perspective from the start. No matter how expeditious deferring them might feel, you will either fail or need to rewrite the product down the road, and that will be a horrible experience for everyone including customers.
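Here is a hypothetical sketch of what “bolted on later” tends to look like in code: the new check guards the front door while an older internal path walks right past it.

```python
# Hypothetical sketch: access control bolted on after the fact. The decorator
# guards the public entry point but not the older internal call path.
USER_ROLES = {"alice": {"admin"}, "bob": set()}

def require_role(role):
    def wrap(fn):
        def checked(user, *args, **kwargs):
            if role not in USER_ROLES.get(user, set()):
                raise PermissionError(f"{user} lacks role {role!r}")
            return fn(user, *args, **kwargs)
        return checked
    return wrap

def _delete_record(record_id):
    print(f"deleted {record_id}")        # the real work, written years earlier

@require_role("admin")
def delete_record(user, record_id):      # the newly guarded front door
    _delete_record(record_id)

def bulk_cleanup(user, record_ids):      # older path, added before the check
    for rid in record_ids:
        _delete_record(rid)              # silently bypasses the new control

delete_record("alice", 1)                # allowed, as intended
bulk_cleanup("bob", [2, 3])              # no check at all, because the model
                                         # was never designed in from the start
```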

Let's just sync the data. In this world of multiple devices, SaaS apps, or data stores it is super common to hear someone chime in “why don’t we just sync the data?” Ha. @ROzzie (Ray Ozzie), who got his start on the Plato product, invented Lotus Notes as well as Groove and Talko, and led the formation of Microsoft Azure, was a pioneer in client/server and data sync. His words of wisdom: “synchronization is a hard problem”. And in computer science a hard problem means it is super difficult and fraught with challenges that can only be learned by experience. This problem is difficult enough with a full semantic and transacted data store, but once it gets to synchronizing blobs or unstructured data, or worse involves data translation of some kind, then it very quickly becomes enormously difficult. Almost never do you want to base a solution on synchronizing data. This is why there are multi-billion dollar companies that do sync.
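Here is about the simplest hypothetical sketch of why “just sync it” loses data: two replicas, two edits, and a last-writer-wins merge.

```python
# Hypothetical sketch: "just sync the data" with last-writer-wins. Two replicas
# edit different fields offline and the naive merge silently loses one edit.
from copy import deepcopy

server = {"name": "Q3 plan", "owner": "alice"}

laptop = deepcopy(server)
phone = deepcopy(server)

laptop["owner"] = "bob"         # edited offline on one device
phone["name"] = "Q3 plan v2"    # edited offline on another

def sync(replica, store):
    # "just take whichever write arrives last"
    store.clear()
    store.update(replica)

sync(laptop, server)   # server now says owner is bob
sync(phone, server)    # server now has the new name, but owner is alice again

print(server)  # the laptop's edit is gone, no one was told, and nothing in the
               # data records that a conflict ever happened
```

Real sync products spend their entire existence on exactly the part this sketch skips: detecting, representing, and resolving that conflict.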

Let's make it cross-platform. I have been having this debate my whole computing life. Every time it comes up someone shows me something that they wrote that they believe works “great” cross platform, or someone tells me about Unity and games. Really clever people think that they can just say “the web”. I get that but I’m still right :-) When you commit to making something cross platform, no matter how customer focused and good your intentions are, you are committing to build an operating system, a cloud provider, or a browser. As much as you think you’re building your own thing, by committing to cross-platform you are essentially building one of those by just “adding a level of indirection” (see above by Butler Lampson). You think you can just make a pluggable platform (see above). The repeated reality of cross-platform is that it works well two times. It works when platforms are new—when the cloud was compute and simple storage for example—and then being an abstraction across two players doing that simple thing makes sense. It works when your application/product is new and simple. Both of those fail as you diverge from the underlying platform or as you build capabilities that are expressed wildly differently on each target. A most “famous” example for me is when Microsoft gave up on building Mac software from shared code precisely because it became too difficult to make Office for Mac and Windows from the same code base—realize Microsoft essentially existed because it made its business building cross-platform apps. That worked when an OS API was 100 pages of docs, and each OS was derived from CP/M. We forked the Office code in 1998 and never looked back. Every day I use Mac Office I can see how even today it remains impossible to do a great job across platforms. If you want more of my view on this, please see -> https://medium.learningbyshipping.com/divergent-thoughts-on-cross-platform-updated-68a925a45a83
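Here is a hypothetical sketch of the divergence problem, shrunk to a few lines. It is fine while both targets are simple, and stuck the moment one platform grows a capability the abstraction cannot express.

```python
# Hypothetical sketch: a cross-platform abstraction that is fine while both
# targets are simple, then has nowhere to put a capability only one target has.
class Notification:
    def __init__(self, title: str, body: str):
        self.title, self.body = title, body

def notify_mac(n: Notification):
    print(f"[macOS] {n.title}: {n.body}")

def notify_windows(n: Notification):
    print(f"[Windows] {n.title}: {n.body}")

def notify(platform: str, n: Notification):
    # Day one: two simple targets, one tidy abstraction. It works.
    {"mac": notify_mac, "windows": notify_windows}[platform](n)

notify("mac", Notification("Build done", "0 errors"))

# Year three: one platform grows action buttons, grouping, and attachments.
# Either the shared Notification type becomes a union of every platform's
# features (and each backend ignores most of it), or callers reach around the
# abstraction for "just this one thing," and now you are maintaining a platform.
```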

Let's just enable escape to native. Since cross-platform only works for a brief time, one of the most common solutions frameworks and API abstractions offer is the ability to “escape to native”. The idea is that the platform evolved or added features that the framework/abstraction doesn’t (yet?) expose, presumably because it has to build a whole implementation for the other targets that don’t yet have the capability. This really sounds great on paper. It too fails, more than 9 out of 10 times. The reason is pretty simple. The framework or API you are using that abstracts out some native capability always maintains some state or a cache of what is going on within the abstraction it created. When you call the underlying native platform, you muck with data structures and state that the framework doesn’t know about. Many frameworks provide elaborate mechanisms to exchange data or state information from your “escape to native” code back to the framework. That can work a little bit, but in a world of automatic memory management it is a solution akin to malloc/free, and I am certain no one today would argue for that architecture :-)
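Here is a hypothetical sketch of the state problem: the framework caches its own view of the native object, the escape hatch mutates the native object directly, and the two quietly diverge.

```python
# Hypothetical sketch of "escape to native": the framework shadows native state
# in a cache, the escape hatch mutates the native object directly, and the
# framework later stomps the change because it trusts its cache.
class NativeButton:
    def __init__(self):
        self.label = ""

class FrameworkButton:
    """Framework wrapper that keeps its own copy of the native state."""
    def __init__(self):
        self._native = NativeButton()
        self._cached_label = ""          # the framework's idea of the label
    def set_label(self, text: str):
        self._cached_label = text
        self._native.label = text
    def native_handle(self) -> NativeButton:
        return self._native              # the escape hatch
    def redraw(self):
        # frameworks trust their caches, as frameworks do
        self._native.label = self._cached_label
        print(f"drawing button: {self._native.label}")

btn = FrameworkButton()
btn.set_label("Save")
btn.native_handle().label = "Save As…"   # escape to native for the new feature
btn.redraw()                             # framework stomps it: prints "Save"
```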

Have I always been a strong “no” on all of these? Of course not. Can you choose these approaches and have them work? Yes, of course you can. There’s always some context where these might work, but most of the time you just don’t need them and there’s a better way. Always solve from first principles and don’t just reach for a software pattern that is so failure prone.

—Steven

]]>
<![CDATA[224. Books to Read and Gift (2024 EoY Edition)]]>https://hardcoresoftware.learningbyshipping.com/p/224-books-to-read-and-gift-2024-eoyhttps://hardcoresoftware.learningbyshipping.com/p/224-books-to-read-and-gift-2024-eoySun, 01 Dec 2024 16:02:13 GMT

Here's a Books I Read in 2024 list. Some of them I enjoyed. Some of them drove me bonkers. I like that part of reading. Hope you enjoy this list.

This year I included some "thoughts" which are in no way a full review but offer more than just the title and jacket (new this year…emojis). I enjoyed some long books this year that took two weeks :-) I have always liked to give books I've read to friends and to teams I've worked with. Each below is available in print, Kindle, and audio. Links are FYI and non-revenue-generating.

I'm not trying to make a statement with this list or thoughts. Just sharing how I felt after the read. Ordered in the order I read them. This post also on X.


  1. 💯 The Power Elite by C. Wright Mills https://a.co/d/gjHfCWZ // I read this classic 1956 book in a college course (Power and Elites in Society) that I found particularly influential and felt it was a good time to re-read. It should really make you think even more today. From the jacket: “The Power Elite stands as a contemporary classic of social science and social criticism. C. Wright Mills examines and critiques the organization of power in the United States, calling attention to three firmly interlocked prongs of power: the military, corporate, and political elite. The Power Elite can be read as a good account of what was taking place in America at the time [1950s] it was written, but its underlying question of whether America is as democratic in practice as it is in theory continues to matter very much today.” // I feel like everyone should read this and think about it. It is very easy to use this work to argue one side or another when the real question is “what if anything to do about it” and alternatively did we have any structural changes that changed this trajectory or did we substitute one set of structures for another equally problematic along the lines of “we become what we vanquish”. I think a lot about the formation of the intelligence community (OSS-CIA) and compare the early days to today, as seen in “The Good Shepherd”. If you liked Burnham's “Managerial State” or "Company Man" by Sampson then this is a natural follow-on but also a different perspective. Note, this is a decidedly old-school essay. It isn’t bullet points or footnoted. It’s a thought-piece though the conclusions and "jabs" along the way are decidedly one-sided if not particularly well-supported beyond anecdotes.

  2. 😻Big Intel: How the CIA Went from Cold War Heroes to Deep State Villains by J. Michael Waller https://a.co/d/d14cMmJ // This is a book that starts from the recent events of massive failures of the national security “apparatus” and both looks backward for root cause and forward for how to fix it. Now that we know everything from WMD to the proximal origin of COVID to the Steele dossier to 51 NatSec leaders saying ‘Russian hoax’ were all massive intelligence failures, the question is how, why, and what to do about that. This would all read as paranoid conspiracy theory had we not just lived through the gaslighting of the recent failures. Some stuff does get pretty far out there or just personal to the author.

  3. 💯😻 Firepower: How Weapons Shaped Warfare by Paul Lockhart https://a.co/d/6IJrGjM // This is a fantastic book about guns/munitions used in war. It is not a technical book that has specs on every weapon from every country (that kind of big coffee table book I loved years ago). It is a book that puts arms in the context of war in a historic sense. So so relevant to today. But really this is a book about innovation, inventors/founders, as well as disruption. It is my favorite book this year about “making things”. The book traces the role of weapons and firepower from the end of the Middle Ages to the end of World War II. The role innovators, needs of soldiers, local conditions, and costs all factored into what worked. It is an incredible book. With all the changes we’re seeing in war today (Israel and Ukraine and drones for example) and what is almost certainly going to be a totally different next big war if there is one, there’s a lot to think about. Founders should read this book for the long arc no matter your view of war or guns.

  4. 👎Nexus: A Brief History of Information Networks from the Stone Age to AI by Yuval Noah Harari https://a.co/d/cqDMkLK // Up front I have to say this was the rare book I gave up on halfway through. From the author of “Sapiens” comes this book that takes on the very broad topic of “what is information”. Let me say this about any book that claims the Internet and information are a "threat to democracy"—I have a very hard time taking the book seriously, especially after this book's own historical account of the printing press. It also includes the tired assertions about “algorithms” and extends them to “agents”. For many this will flip the bozo bit on the whole book and thesis. Following that, the book took on an AI doomerism tone with all the standard and poorly understood (by the author) tropes. That said let me talk about the book a little bit. It takes us through the history of information such as stories and books and builds up all the way to a very modern view of AI and information. It has a good number of essays that could stand alone, especially the first chapters, like the section on bureaucracy and documents. I enjoyed this part but make sure to allocate time to take in the breadth as it isn’t for the faint-hearted. It is also important not to read this through a political lens and to recognize the views expressed are those of an historian with an obvious present-day bias selecting which sources to weigh more than others—the selection makes the author’s personal views fairly clear. For example, the author cites a mid-1400s document from the Pope, giving permission to destroy, take over, and convert other (Pagan) religions in the name of the Church. But this is hundreds of years after the Muslim expansion had started and thus many recognize that such authorization came as a result of defending the Church and recapturing what was lost by a pacifist Church, not really expanding it. I appreciated the history of the overlap of printing and witchcraft. The basic idea is that the free exchange of information also could lead to the free exchange of erroneous or conspiratorial information, such as witches. But that leads to the need for society to have information curators. Which of course sounds good on paper. But then you overlay the Church and you realize that the curators themselves are susceptible to human fallibility. This is why the free exchange of information was necessary, but not sufficient, for the expansion of science, because the church quashed everything that ran counter to its teachings, even when it was a new form of scientific information. There’s a lot more as it gets into AI, but I did feel it became more of a (not well-informed) opinion essay and less of a substantial work, and I disagreed with the extrapolations or implications presented. The goal seems to be about democracy but the conclusions felt rather extreme in putting power in the hands of those with self-perceived enlightened views.

  5. 😻😻😻 Civilian Warriors: The Inside Story of Blackwater and the Unsung Heroes of the War on Terror by Erik Prince (2014) https://a.co/d/horKsBs // Reading about the Crusades made me want to learn more about how the US fought in the Middle East. There was so much controversy over “mercenaries” and the role they played in “profiteering” (many congressional hearings and Sunday talk shows), but I also knew paid support of the military had a long history in the US, not to mention in global conflicts. What I didn’t realize was just how much the US government (not just the military) came to depend on contractors for support. This is also the story of an entrepreneur, a soldier, and a person on their own life journey. It is a fascinating story. It is super important to go into reading this aware of, or willing to acknowledge, that there’s no world order without private military contractors; if you call them mercenaries or think they are somehow “extra-jurisdictional” actors doing sketchy things, then you will miss what is an important founder story and a significant issue for executing our global foreign policy. Recommended for founders too since that is ultimately the story.

  6. 👎A Colony in a Nation by Chris Hayes https://a.co/d/5hCoRgh // This book is a pretty standard view of the current racial situation in the US—the view that there are really two societies in the US (the title comes from a President Nixon speech during the upheavals of the late 1960s). I don’t think there is any new territory the book covers and the focus is on the notion of institutional white racism or “white fear”, though it tries to build on the more recent events such as the Ferguson MO protests and BLM movement. The book, as is usually the case, tends to overplay anecdotes and say the statistics are just symptoms of deeper problems. While the book tended to anchor on the BLM movement, only in a mere footnote did it mention that “hands up don’t shoot” was fake news and that in fact Michael Brown was moving towards the officer in a “threatening manner”, and it repeats the trope of a “small number of young men looting and rioting” to excuse the wave of violence across the country that many of us lived through (Seattle CHAZ). There are plenty of anecdotes from the author's life growing up in the Bronx and attending Brown University. Ultimately this book feels like a sum of defund the police, empty the prisons, remove sentencing guidelines, etc., and move accountability for crimes from criminals to white “fear”. I was hoping to gain some new perspective. In a world where our cities have been rendered unrecognizable it is tough to think the same approaches make sense. Of course the claim is that the true best approach (treating crime like a social problem and not a violence problem) has not been fully implemented. This debate structure quickly descends into the college debate about Marxism not having been fully implemented in Russia/USSR.

  7. 😻😻 😻😻Sword and Scimitar: Fourteen Centuries of War between Islam and the West by Raymond Ibrahim https://a.co/d/2mGfL0x // With all going on in the Middle East it is easy to lose sight of the history of the region. In particular, most in the modern debate (like on X) often talk as though history started in 1948. It is worth reading different perspectives on the history of the battles across antiquity, from the pre-Christian era through the Roman Empire and the rise of Islam. The violence of the rise, expansion, and then retaking of Europe between Christendom and Islam says a great deal about what is going on in the world today and importantly provides context about why what is happening, at least for many, is hardly localized to Israel and doesn’t end there. Those that think what is going on is a fringe part of one religion or another would be well served to read a perspective that takes a strong point of view, and this book is that. There’s a very deep and to some deeply concerning question about the ability to find peace between Christendom and Islam. This book is incredibly well sourced and noted. It cites specific original writings which are often the subject of debates over “context” in modern dialogs.

  8. 😻😻 😻Defenders of the West: The Christian Heroes Who Stood Against Islam by Raymond Ibrahim https://a.co/d/5BZcQBz // This book should be required reading for those not well-versed in the “why” of the Crusades and the way the West pushed back on the colonialist expansion of the Islamic world, not to mention the brutality all around. A well-known scholar and an Arabic-language specialist for the Near East section of the Library of Congress, Ibrahim tells the story of 8 defenders of Christendom against the challenges of brutal Islamic imperialism. A sequel in some sense to his previous book on the history of Christendom and Islamic cultures. Especially today, one might think the 1,400-year history of Islam is that of a peaceful religion on the defense against Western Europe. In fact the expansive and violent nature of the history is well worth understanding. His writing is viewed by some as propaganda but it is history told by an Arabic scholar and translator. One might argue that the dislike of his scholarship is due to the incredible control the Islamic culture exerts over any scholarship that might be seen by them as critical of their history or, worse, the Prophet himself.

  9. 😻 😻 😻 😻 The Strange Death of Europe: Immigration, Identity, Islam by Douglas Murray (2018) https://a.co/d/8nQvHw8 // This is an important topic and excellent book but a difficult one to discuss, at both a personal level and a general level. What is culture? What is multi-cultural? What does it mean to be an immigrant? Is there a healthy level of immigration? What is guilt over the past? What is asylum or refugee status? When and why did collective wisdom stop being about assimilation? The topic is the way UK/Europe has lost its identity due to immigration and the erosion of what made Europe, Europe. It is a super interesting topic. Personally, since all my relatives “immigrated” (e.g. escaped) from Europe and generations before them “immigrated” (e.g. escaped) to Europe, it is close to home. The book starts with an early 2000s prediction that the majority of citizens in London will be neither white nor Christian by the end of the lives of everyone alive at the time. It was both absurd and “-ist” to make statements like that. Yet just two decades later, 44% of London identifies as white Christian and a third of households have no English speakers. That’s how the book starts. The US in major cities is of course much the same. All of these “fears” were discussed in the early 20th century when my grandparents and great-grandparents arrived. The biggest difference was one of assimilation. That’s why this is so personal, as a product of assimilation versus cultural continuity for those that arrived. Chapter 10 on “European guilt” is required reading for those who have thoughts on reparations, genocide, or apologizing for the past. This is happening. I recommend reading about it.

  10. 😻😻The Parasitic Mind: How Infectious Ideas Are Killing Common Sense by Gad Saad https://a.co/d/6Wcgx1w // The author is a Lebanese-born, Arab-first-speaking Jew who as a child fled his country under gunfire to grow up in Canada. Almost no one reading this list will read this book. It is one of those books where the built-in audience gets a ton of confirmation and those not reading it just assume it is not worth the effort, and probably only know the author as what would be called a Twitter troll for some battles with well-known people. The book is better thought of as an essay and strongly formed and supported opinion piece. This is not about “proof” even though the writer is a credentialed social scientist. He’s been on Joe Rogan and quotes Jordan Peterson, which is enough to “classify” him for many. Yet that is why it is worth approaching something like this with an open mind. The arguments are sound in many ways but proof is elusive since we’re talking about human behavior—with the exception of the section on Islamophobia and antisemitism, which is excellent and provides ample footnotes to rathole on. That said, this book offers chapters on what likely originated with this author—the woke mind virus. As a product of 1980s political correctness, I just wish the debate this book took on could receive more rational discourse, as the problems described are real. As I read this, the discourse around DEI had reached a new conclusion as major universities began to share admission demographics for the incoming 2024 classes.

  11. 😻😻😻The Future Was Now: Madmen, Mavericks, and the Epic Sci-Fi Summer of 1982 by Chris Nashawaty https://a.co/d/76JXGmC // wow this book is amazing. It is the “behind the scenes” story of the birth of and creation of modern/epic sci-fi in the summer of 1982. That summer saw 8 landmark films including Tron, Blade Runner, E.T., ST:WoK and more. Really a super fun read. Also that was an amazing summer. Loved this book from when movies were such a part of the summer.

  12. 😻On the Edge: The Art of Risking Everything by Nate Silver https://a.co/d/1Tbm2ux // This book was a tricky read for me. I love the underlying topic but the use of poker and sports in fairly extreme detail left me behind. The book is about the culture of risk. I just think it would have been more enjoyable had I shared a connection with the topics Silver is so passionate and articulate about.

  13. 😻😻😻Peace to End All Peace by David Fromkin (1989) https://a.co/d/ab23luJ // I felt the need to dig into a well-regarded book on the pre-1948 history of the Middle East. This is _the_ book AFAICT. It is about the dissolution of the Ottoman Empire at the close of World War I and the consequences for the Western powers, the then Soviet Union and, to a lesser extent, the peoples of the Middle East themselves. At the macro level the book shows the political origins of the present-day Middle East. The book ends with the territorial settlements of 1922, when political lines were drawn that mostly reflect today’s boundaries. The amount of contemporaneous European politics, -isms, and misunderstandings was as surprising to me in reading this book as they are when reading twitter posts today.

  14. 😻😻Dead Aid by Dambisa Moyo (2009) https://www.amazon.com/dp/0374139563 // Have the billions in aid sent from wealthy countries to developing African nations helped to reduce poverty and increase growth? Poverty levels continue to escalate and growth rates have steadily declined—and millions continue to suffer. This book is really a must read for anyone who believes that big philanthropy and global development work are even a net positive. It is irrefutable that the trillion dollars of post-war aid have done more harm than good. This book explains some of the reasons why. I’m re-reading this just as the President wrote off tens of billions of student loans and the recent large-scale UBI tests failed. Why does simply giving money to a country or an individual fail to work the way it feels like it should? This book makes a strong case, and one that is just as true after another 15 years of aid as it has been for the first 60 years. There are other books critical of aid that are probably more “scholarly”, the classic being “White Man’s Burden” by Easterly, though this book is a good read, a good overview, and kicks off potential solution ideas. The thrust at the end brings us back to what made me reread this—conditional cash transfers—an offshoot of UBI. The recent study shows, at least in the developed world, the lack of efficacy of this solution. The wildest aspect of this book is that if you work in this field at the international level, everything that it said is true, so why do people keep trying the same things over and over again? This is a space that could use some innovation or more likely some invention and rethinking from first principles. What most people are really reluctant to mention is the success that China has had. Rather than try to create a government or manipulate society into behaving a certain way, China simply turns this entire process into a transactional one. You have money in exchange for roads or resources or other commercial relationships. And if you are corrupt or anything like that, it doesn’t change the fact that you need to deliver. The Western view of altruism has always assumed nothing in return in the short term but, in the medium term, some convergence on ideas, government, and morality.

  15. 😻😻😻Raven Rock: The Story of the U.S. Government's Secret Plan to Save Itself--While the Rest of Us Die by Garrett Graff https://a.co/d/2CzRcbH // This is a fantastic book tracing the history of the procedures developed to ensure that the government of the US can continue even in the face of nuclear attack on our soil. A vast amount of money and thought have gone into the scenarios, plans, and materiel. As a kid who did 70s bomb-shelter drills in elementary school and was an 80s “survivalist”, I have always been fascinated by this topic. Yet the reality is a modern nuclear or chemical war will with near certainty be an extinction event, at least for North America and whoever starts it. Given the fragility we all saw with Covid and our general inability to make it through a flight with poor WiFi, it isn’t clear to me, and this book and Nuclear War only emphasize such a conclusion, that any preparation makes sense. As we learned from War Games, the only way to win is not to play. Scary to think this current leadership thinks they are prepared when for 80 years and hundreds of billions of dollars we were not. The pandemic response shows that the system is not about the population but the government (FEMA is really about the government), which came as a surprise to most (remember the government sitting on stockpiles of masks, O2, and respirators but not releasing them—that’s why.)

  16. 😻 😻 😻 Nuclear War: A Scenario by Annie Jacobsen https://a.co/d/55vsybE // Do these terms mean anything to you: MAD, fallout maps, LOW, SLBM, doomsday bombers, nuclear winter, SIGINT, Puzzle Palace, SIOP, Ivy Mike, NRO, Cheyenne Mt, MIRV, Triad, Raven Rock? If so then like me you grew up at the height of the threat of nuclear war. If not then you grew up in the “it will never happen” era of 1990 to about 2020. Read this for a reminder of what we grew up with, or read it to see the reality of War Games and what the world would go through. With North Korea, Iran, and Russia all raising the level of risk, it’s worth considering that defense isn’t an option. This new book is filled with first-hand accounts from the people who built out our nuclear defense and strategy. It is written as a tick-tock scenario not unlike the myriad of “docu-dramas” we saw while growing up, like Nuclear Winter by Sagan and others. It really is a must read. Riveting and scary AF. PTSD for boomers and GenX. Incredibly well done and necessary.

  17. 😻 😻 Suicide of the West: An Essay on the Meaning and Destiny of Liberalism b James Burnham https://a.co/d/0eVe2Vhs // From 1964 (!) this book does a great job of framing the world we live in today—a world where we find the progressives/liberals largely siding with entirely anti-liberal/anti-progressive positions/concepts/nations. It is essentially a treatise on the evolution of the concept of liberalism. The book is written in classic Burnham style which means many lists of bullet points/assertions that are expanded upon (one list was 19 items long, another list of “modern liberals in the media” seemed to go on forever.) It often feels like a full assault on the reader but that is also a bit refreshing compared to just broad thematic work. Everything from “Karens” to reverse discrimination to white guilt to asymmetric outrage make a prescient showing. The section on foreign policy—written in the 1960s—could be describing “support decolonization even if it means supporting the enemy sworn to your demise” we see today. If you’re expecting footnotes or proof this is not for you, the title says it all—this is an opinion essay.

  18. 😻 😻 Several Short Sentences About Writing by Verlyn Klinkenborg https://a.co/d/0j2i6HPT // Without going too far, this book is in the spirit of Strunk & White, which made such an impression on every Cornell freshman like me. The author brings the idea of a relentless focus on short sentences and clarity. He does so without claiming to offer a recipe or “system”, which I really appreciated. Everyone, especially me, would benefit from a deep engagement with the ideas and approaches in this book. One read of this explains why generative AI writing is verbose and pretty wordy. It is built off a lot of patterns that follow from people writing “one word after another” versus “one sentence at a time”. Super interesting to think about. Worth reading just for this. Really difficult to do.

  19. 😻 😻 The Hacker Mindset: A 5-Step Methodology for Cracking the System and Achieving Your Dreams by Garrett Gee https://a.co/d/09YSpiQ6 // Trapped, frustrated, bored? Then being a hacker is for you—the book takes the mindset of hacking and merges it with the techniques and language of general self-help. Not at all what I was expecting. “Slackers versus hackers” is about people who are trapped in a system (life, job, etc) versus those breaking or disrupting that system. He does point out he is taking an amoral stance and that bad people can do bad things just like good people can do good things. When I’ve talked about this in the past I used a sports analogy—are you a person that embraces intentional fouls, sacrifice flies, or grounding? Those are the rules hacks of sports. The biggest innovators saw the rules and used them to an advantage even in the face of claims (or risk of penalty) of unsportsmanlike conduct. The 6 principles of a hacking life include: 1. Be on Offense 2. Reverse Engineering 3. Living Off the Land 4. Risk 5. Social Engineering 6. Pivot. This is a really important mindset. The title is not what you might think. The book tapered off at the end when it got into starting your own business and was a bit vague and difficult to act on, along with basic money-management tips that felt a bit out of place. But the framework is great.

  20. 😐 The Utopia of Rules: On Technology, Stupidity, and the Secret Joys of Bureaucracy by David Graeber https://a.co/d/5b8z1tV // This was a complex book by someone I know I would not agree with on just about anything. Early in the book, which is more a collection of essays, he introduces the “iron law of liberalism” where any market reform, any government initiative intended to reduce red tape and promote market forces will have the ultimate effect of increasing the total number of regulations, the total amount of paperwork and the total number of bureaucrats the government employs. Given our collective experience it is difficult to disagree with. It is the more hyper-progressive Occupy Wall Street view of solutions I don’t agree with, even if I found myself nodding in agreement with the awfulness of every anecdote.

  21. 😻😻😻😻Stories I Only Tell My Friends: An Autobiography by Rob Lowe https://a.co/d/jeRI7vM // I loved every story and every word of this memoir. And the audiobook was an incredible treat, like having Billy, Danny, or Sam tell the story. Films like Outsiders, Class, About Last Night, St Elmos, Oxford Blues formed the backbone of trips to the Altamonte Mall or Pyramid Mall in college. When I was an RA every girl in the dorm (literally) had that Billy Hicks poster/crush. Beyond that personal connection to his work the memoir is filled with insights about the evolving film industry. I loved for example the sentence “Dustin Hoffman’s opus ‘Agatha’ from a time when a-listers made movies that didn’t have their character’s name in the title.” The deeply personal stories were all tastefully and emotionally told—not at all a gossip. Oh, and the West Wing. Treat yourself to this audiobook. I wish I had done so sooner.

  22. 😻😻😻The End of Race Politics: Arguments for a Colorblind America b Coleman Hughes https://a.co/d/3znEyLc // I am a new Coleman listener having joined his following when I listened to an interview with Israeli author and war cabinet member Benny Morris, an historian with views and books that are criticized from all sides but an enormous depth of knowledge. Coleman works through the very core of the “anti-racist” movement to demonstrate how it is itself racist, what he terms neo-racism. He does an excellent job detailing all the ways the well-intentioned efforts end up disadvantaging the very people they aim to help. The telling is compelling even if you are predisposed to strongly agree or disagree. After reading I looked at some reviews and found out just how much of a hard time he is being given—The NY Times used the headline “The Young Black Conservative Who Grew Up With, and Rejects, D.E.I.” from Feb 2024. One look at that and you can actually see Neo-racism in play, which is awful—how they can call out his skin color in a review about a book on being colorblind is kind of incomprehensible. Read this book.

  23. 😻😻😻Morning After the Revolution: Dispatches from the Wrong Side of History by Nellie Bowles https://a.co/d/9skQNrI // This book is amazing. It captures the absurdity of a “movement” that went haywire during the pandemic. It became a NYT best seller opening week, which is amazing and deeply ironic (not to mention rewarding for Nellie, a former NYT reporter). The description of Seattle’s “Autonomous Zone” is beyond precious. Don’t let the incredibly entertaining writing take away from the important facts and storytelling of a lens on events that is mostly buried by nearly all press outlets of record. The entire section on “anti-racism” and “Karens” was a tour de force of saying the unsayable. It took me back to my OG resident advisor training at the dawn of anti-racism (c. 1984 when it was just called power dynamics) when I got yelled at for an afternoon for being white and an abuser of power by a facilitator in a pink jumpsuit with colored hair, which even then was a weird thing to tell a Jew with living Holocaust relatives. The section on gender and sexuality was sensitive while pointing out what can be seen by all. I listened to the audio narrated by Nellie, which makes for an even better read and the occasional great French accent. Detractors will see Nellie as a traitor and will scoff at her descriptions of people, what they said, and events. I think she’s saying what a lot of people felt or feel but were legit afraid to say. The book is a personal memoir, not a sermon, which also annoys those who used to agree with her.

  24. 😻 😻 😻 The Genius of the System: Hollywood Filmmaking in the Studio Era b Thomas Schatz https://a.co/d/eQYAzkg // There are a ton of books about classic Hollywood but this one has a unique perspective that will be super valuable to any builders. It talks about the “system” of Hollywood and what made that system and why it worked and why it failed. It tells the story through the lens of big important people we have heard of like Thalberg, Selznick, Zanuck and Hitchcock. It importantly talks about what it took to get movies made and the ever present balance of art and commerce, of story and spectacle. It demonstrates, intentionally or not, that in most endeavors/orgs only a very small number of people really know what to do at any given time/era. There’s even stories of disruption such as how the big studios slow-rolled talkies since they were vertically integrated theaters with orchestras whereas the lesser studios (Warner!) were not set up for any sound. This book should really make you think about the operations of big companies and creative endeavors. Lots of interesting “org” thinking as studios pioneered the “unit” org versus hierarchical general management. Plus even antitrust makes an appearance. And not to be outdone so does censorship. But wow, the salaries in classic Hollywood were crazy—50-100x the typical salary, but also the reality of taking advantage of the writer/director/actor talent (like Cagney, Davis, and more). I know for me it made me think a ton about the “operating system” we built at Microsoft for building software. If the “Fountainhead” describes the idealized view of a founder (or “The Founders”), the “System” describes the idealized BigCo. Another founder book.

  25. 😻Means of Control: How the Hidden Alliance of Tech and Government Is Creating a New American Surveillance State by Byron Tau https://a.co/d/3yiOdeD // When watching a show like “FBI” or the classic “Person of Interest” there are two kinds of people when it comes to privacy and the government. One kind says “dang this is cool and this is how crime should be stopped” and the other says “OMG why in the world should we trust the government to only do good things with all this data and not ruin lives”. It is very tough to see all the benefits of technology, specifically cell phones, GPS, location data and at the same time the risks from global terror or even domestic violence and not think the government should use the tools it has. Yet there need to be limits because it isn’t just abuse but the ability to completely upend the lives of innocent people. I think about on “FBI” when someone just happens to be in the wrong place at the wrong time and sees a crime (or worse taking a cell phone photo of a getaway by accident) and there’s Jubal in the JOC barking out “get that phone, ping it, got it…send OA and Maggie and pick them up” and then an innocent person ends up in the dragnet of the effort. This book is about what the government has done to use not secret eavesdropping but the grey area of “public” data collected for advertising or by apps and data brokers (NOT BIG TECH). It doesn’t do a lot other than freak out you but this is an important topic that is not so simple. The right to privacy is not in the constitution but the framers certainly believed in the right to be left alone. This book breaks my regular guidelines of being a journalist writing about their beat but it is not about one company or frankly even anti-tech. You learn that the worst offenders are the brokers, the parasites riding off (or circumventing) the benefits we all see every day from Google, Apple, Facebook, etc. This is a valuable contribution if looked at through that lens.

  26. 😻😻The Second Brain: A Groundbreaking New Understanding of Nervous Disorders of the Stomach and Intestine (1998) by Michael Gershon https://a.co/d/a5h5TH3 // “My interest in serotonin started in 1958 as an undergraduate at Cornell!” From this in the introduction this book only got better. This incredible book is about how real science is done and real discoveries made. I got a great sense for how good this would be early in the book when Gershon said “this was back when the government funded science experiments even when it wasn’t sure the experiment would even work.” (This book is from 1998!) There’s attitude, rivalry, personalities, success, failure, good ideas, wrong ideas, stubbornness, persistence, collaboration, and everything in between. The descriptions of the people he collaborates with and those that are in his lab are so wonderfully warm and amazing. It is a memoir and a book of discovery. Shout out to @davidacrotty and memories of transgenic megacolon mice from the 80s and that time my car got broken into at 168th st. The science in the book is real as is the humility and humanity of discovery. This is a “builder” book for the year. Loved it.

  27. 😻Zero Days by Ruth Ware https://a.co/d/bQ4T4ym // This was super fun and fulfilled my one-per-year fiction read. One part of it bugged me a lot and was a key plot point that was technically off. Would have been fine in a screenplay but not a book (2FA via SMS). As usual/expected, I did not need the self-reflecting essay/monologue at the end. A good one for the airplane or beach.

  28. 😻The Art of More: How Mathematics Created Civilization by Michael Brooks https://a.co/d/fYuA9es // AI is just math. When thinking about regulating AI it is a good idea to have some context about the history of math. There are a lot of books about math history. I enjoyed how this one presents some deeper examples in a way that makes them fun. Really enjoyed some of the history I had not come across, such as logarithms. Pay attention to the PDF or get print.

  29. 😻Four Battlegrounds: Power in the Age of Artificial Intelligence by Paul Scharre https://a.co/d/ezqmf9J // A former Ranger and defense analyst at the Pentagon argues that four key elements define this struggle: data, computing power, talent, and institutions. A major theme is the risk of China and in this regard it talks about the importance of AI in defense, the huge negatives of AI as used in Chinese society, and the many places the US is conflicted in China (there’s a lot devoted to the role Microsoft played with Microsoft Research in elevating China’s AI work.) The defense arguments are strong yet much of the civilian use starts to sound like the fairly standard/mid “AI is risky and we need to get ahead of it”. Where the author is aggressive on the use of AI in the military (having personally been an advocate while in the military), he is reluctant or concerned about civilian uses and uses by government outside the military. At the same time there is clear recognition that the cat is out of the bag. There’s an entertaining section on Microsoft’s MSR lab in Beijing and how instrumental it was to the *creation* of China’s AI ecosystem, which I’m sure is how people feel but definitely looked better a few years ago (if you read Mandarin you can read about how I was a negative influence wrt China on that earlier era in a book by Kai-Fu Lee). The irony of this view today is just how much effort Microsoft was putting into elevating MSRA all the while OpenAI was gaining momentum and Microsoft had no product offerings of its own from this “foundational research”, yet had no problem giving tours and touting the whole of MSRA. The section on TikTok does the standard “algorithm is the problem” which for me is always a bit of a🚩for what follows. I really enjoyed the section towards the end bringing together topics and detailing the future of war. It had shades of Terminator and ST:ToS “Taste of Armageddon” and “Ultimate Computer.” There's not a ton new here but the perspective of a DC/military insider brings value, and I very much appreciate how frequently Scharre was on the ground hearing firsthand what was going on, and telling us when he was.

  30. 😻😻The Metaverse: And How It Will Revolutionize Everything by Matthew Ball https://a.co/d/eLb9AWb // Given all that is going on with AVP and Meta I would suggest reading this. If you think the EU is doing wonderful work on DMA, at least read the chapter on payments to understand the history behind 30% and how either the EU should be going after gaming companies, or the EU was short-sighted in going after only Apple, or the whole thing is a farce because the market works. This book, for me, is a sharp contrast with Read Write Own. Both books acknowledge the power dynamic with big incumbents; where RWO focuses on building and creating a new way to do new things in the market, The Metaverse seems to focus too much on how to extend the existing games market and “divide the pie” versus grow it. It takes as a given the big companies needing to be regulated, even at one point towards the end simply suggesting that it doesn’t seem fair to regulate them after creating the platforms we all use, but that is the price of success.

  31. 😻😻😻💯Read Write Own: Building the Next Era of the Internet by Chris Dixon https://a.co/d/3wtIe9o // Chris is a friend. He’s also written some of the best thought-provoking essays of the internet era (come for the tool stay for the network, the internet is for snacking, idea maze, climbing the wrong hill, hobbies/what the smartest people do on weekends). But please, if you work using the internet you must read this book. First, grasp the historical description of the culture and origins of the Internet. This background is incredibly important to understand the “problem space” technologies like blockchain are going after. And second, even if you can’t see past the “casino” (as Chris calls it) there’s an enormous amount to consider that Chris describes when it comes to improving and reinventing the internet by returning to the roots of the “movement” that started it. An analogy that works for me is that in the 1980s a lot of the commercial world was against “free software”—which many saw as the heart and soul of the internet—for obvious reasons (myself included as part of Microsoft). Then as the movement changed approaches and companies like Google, Meta, etc. joined and became contributors to the “open source” movement it became much more acceptable everywhere. There will always be casinos (as Chris points out) but there will also be a strong rationale for what comes out of the openness and culture of the internet. It is really important not to let the casino players mess with learning—if only because one might not have noticed there were land grabs/casinos at every step of computing that went unnoticed by some.

  32. 😻😻Intellectuals and Society by Thomas Sowell https://a.co/d/1Rl2Qp6 // With all the stuff going on with university presidents and more importantly the debate over the integrity of basic research (Stanford, Harvard Medical, even all of Covid) I wanted to reread this book from 2010. Who are the “public intellectuals” and why do they have so much influence compared to the people “in the arena” (my words)? Fundamentally this is an intellectual critique of intellectualism. It is a social science book questioning the validity of social science. But it is also compelling in the context of today’s punditry.

  33. 😻😻The End of the World Is Just the Beginning: Mapping the Collapse of Globalization by Peter Zeihan https://a.co/d/cbv8gvg // I think there are real challenges in the global order (as it is called) as we saw with Covid and the supply chain. This book is almost doomerism come to the world order and economy. Reminds me a great deal of Howard Ruff’s 1980s “How to Prosper During the Coming Bad Times”. Very much a “story” and not a deep economic reference or footnoted analysis. The audio version is like a 17-hour TED talk. Will some be true? Absolutely yes. Just which parts and when? But makes you think and/or stress. If you want an optimistic view of what happens in the US after “globalism” ends, this is it. Go America!

  34. 😻😻How Innovation Works: And Why It Flourishes in Freedom by Matt Ridley https://a.co/d/iWD57Z1 // Really two books. The first is a series of well-done vignettes on a good number of innovations and how they really came to be. Chances are readers will know one or more of these stories but they are well told. The book is structured as a well-organized essay taking those examples and developing a framework that is highly tilted towards the thesis that free markets, free exchange of ideas, the ability to try and fail, and the ability to share matter a huge amount. Every time someone takes this approach, others immediately rush to point out that the really big inventions came from government (like computers, the internet, medicine). Though those champions always minimize the role that war or nationalism played in those. It might be impossible to generalize about innovation and perhaps that is the real point—it takes a lot of approaches to “innovate” though masterminding or inventing are not the ones that tend to work best. I liked that this was an Isaacson-like book but pivoted around innovation more than personality. A lot of stories are classic anecdotes and I wish there were primary sources as I picked up on a few things too many that left me feeling uncertain or that a community note would appear. The take-down of EU regulatory failures and misinformation is worth the book. I did enjoy this book and it would make for a really great “book club” discussion on innovation.

  35. 😻The Canceling of the American Mind: Cancel Culture Undermines Trust and Threatens Us All―But There Is a Solution by Greg Lukianoff and Rikki Schlott https://a.co/d/02NDDgt // started reading just before university president congressional hearings. So many examples and anecdotes. There’s a whole chapter on the gaslighting of “there is no cancel culture”. It is easy to get caught up debunking this book, but it even has a section on how arguments against cancel culture are routinely made.

  36. 😻Surveillance State: Inside China's Quest to Launch a New Era of Social Control by Josh Chin, Liza Lin https://a.co/d/eqDeAVV // all about surveillance and the growing risk to privacy and individual liberty. Shows the depth of the development of China’s infrastructure and its export around the world. But the real stressful part is how the US, far less overtly, has dramatically increased the surveillance state with too little accountability. This topic drives me bonkers as far as US policy but I feel helpless.

  37. 😻😻The Lessons of History by Will & Ariel Durant https://a.co/d/5fLQaXE // A concise survey of the culture and civilization of mankind, The Lessons of History is the result of a lifetime of research from Pulitzer Prize–winning historians Will and Ariel Durant. “Do we really know what history was or what really happened or is it simply a fable not quite agreed upon? Most history is guessing and the rest is prejudice.” I wish I could read all 10,000 pages of their “The Story of Civilization”. Still the book has many cringe moments such as “you have to breed as well as breathe.” “Throughout history man remains the same. He changes his habits but not his instincts.” The Durants were the foremost award-winning historians of a generation. Yet when I listened to this book (audio has excerpts of Durant interviews) I could not help but be taken back to 6th grade world history with Mr Huston saying things like “the negroes of the Nile” and “nature versus nurture” or “decline of religious belief is far more important than any economic battle.” Durant says we are in a post-theological time in the West, which is necessarily a decline if you consider history. Listening to the Durants speak was like listening to the narration record accompanying a Dukane projector filmstrip. Thus the best reason to read this is to think about what is taught today as history and realize it too will be cringe in 40 years or less.

  38. 😻😻Classified: The Untold Story of Racial Classification in America by David E. Bernstein https://a.co/d/cTqkSGB // With all the debates over oppressed v oppressor, looking at how we even categorize people in the US for things like jobs, hate crimes, preferences, etc. is critical. This book is an **infuriating** look at the arbitrary and cynical world of racial and ethnic classification. It is filled with the absurdity of the legislation, court battles, and cynical application of what amount to fairly nonsensical classifications when considering the goal. This book is must reading for anyone arguing for any disparate impact or remedy based on a classification. The system is abused from every direction and the resulting benefits go disproportionately to those not particularly harmed while the negative results continue to grow. If you really want to be upset read the chapter on race and medical research. The whole concept has gone so far astray from the original goals it is no surprise no one can agree on these topics today. The use of classifications for science research even though they are wholly unscientific is not just bad science, it is harmful. tl;dr It is amazing how much of a role money plays in all this.

  39. 😻😻😻😻💯Chip War: The Quest to Dominate the World's Most Critical Technology by Chris Miller https://a.co/d/02Sz24S // A great, quick, intro-level history of silicon. Why read? Everyone needs to know the story of DRAM in the 80s, especially Japan, DoD, and US companies. Also great characterization of chip company culture, founders, and execs (tl;dr very tough managers). Follow the links mentioned and keep reading (e.g. Made In Japan, Only the Paranoid, etc.). Super fun!! Given the run-up of chip stocks this seems like a required read.

Suggest some books to me in the replies! Happy 2025!

# # # # #

]]>
<![CDATA[223. On the Toll of Being a Disruptor]]>https://hardcoresoftware.learningbyshipping.com/p/223-on-the-toll-of-being-a-disruptorhttps://hardcoresoftware.learningbyshipping.com/p/223-on-the-toll-of-being-a-disruptorTue, 26 Nov 2024 03:15:40 GMTIt is tempting to want to see the positives of technology disruption as distinct from the disruptive nature of the person or organization causing disruption. As we all know it is pretty easy to be disruptive in meetings, the workplace, or online without being a force for a disruptive innovation. That causes us to wonder if it is possible to lead such change without being disruptive. In practice being disruptive is probably a necessary part of bringing a disruptive change to the world. It is not pleasant for either the incumbent or the protagonist, but it might be a necessary part of getting something done.

Image

The concept of “disruptive” technologies or more broadly disruption in business goes back to the 1940s and economist Joseph Schumpeter and his concept called “creative destruction”. Creative destruction in the simplest terms is “the deliberate dismantling of established processes in order to make way for improved methods of production”. It can be applied to a broad set of circumstances across technology, government, finance, media, and more. As an economist he was keenly aware of the effects of such destruction, both the intended and unintended, or byproducts of such a seemingly violent way to innovate.

Thanks for reading Hardcore Software by Steven Sinofsky! Subscribe to receive posts directly. Also check out my Microsoft history/memoir.

In the mid to late 1990s, Clay Christensen and Joseph Bower coined the term “disruptive innovation,” the timing of which coincided with the rise of the modern internet and the broad-based information age. It was in part due to that timing and the business successes that followed that the literal meaning became second to the broad and appealing metaphor of being disruptive innovators.

Clay’s (with Bower) Harvard Business Review paper “Disruptive Technologies: Catching the Wave” and later his book “The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail” became beacons for technologists everywhere. Everyone claimed to be disrupting something. No matter what you were doing you were in the business of disrupting some old way of doing things with your new software and internet way of getting things done. From 1998-2000 (March 10th specifically) delivering groceries, selling pet food, and even creating internet money were all major disruptive efforts—and colossal failures.

Image
The HBR article that preceded the classic book.

We make fun of those failures today. Yet those efforts and many more proved to be not only successful years (or decades) later but wildly successful. What we forget is how much we mocked the people creating these efforts and many like them. What we forget is the litany of experts who predicted their failure. What we forget is just how difficult it was to change things.

Image
The pets.com sock puppet became a symbol of disruptive hubris. Yet consider how many of us rely on chewy.com today.

One example from that era I will never forget is listening to the CEO and founder of one of the major shipping companies in the world explain in painstaking detail how “local delivery” (that’s what it was called when stuff gets delivered to your house) could never be any faster than the overnight shipment that was then standard, extremely expensive, and used only for very important documents or emergency machine parts and the like. The modeling put forward was detailed and excruciating. It explained why there was no way to ever deliver food or groceries from the source to a home. As a bonus he went on to explain how shipping had a baseline cost that precluded making it free and that returns of goods would always be an exception, which is why there were limits to how much one would end up buying via new-fangled internet shopping.

Image
From "The disastrous rise and fall of a $10 billion Bay Area unicorn". This postmortem came later but we can see the hubris. But what we see described is little different from our experience today with Instacart or DoorDash. https://www.sfgate.com/food/article/rise-fall-bay-area-startup-webvan-19829522.php

When a successful founder and executive dismantles an idea so many people are chasing it lends credence that is carried forth through the media, other businesses, and more broadly. The experts spoke. It wasn’t just that they spoke, it was that those attempting to challenge the experts were ignorant at best or at worst simply defying basic laws of physics. The arguments against shipping weren’t about strategy or anything, but physics of miles, time, and gallons of fuel.

Then when companies failed—delivering anything from clothes to cookies—the told-you-so comments were everywhere. Those risk-taking founders were held up as examples of people that didn’t listen to experts. They didn’t “understand” the “real world”. They were naive and wasteful.

Of course, when any sort of creative effort succeeds those same people are innovators. They “defied” experts. They are the “rule breakers”. Then they are celebrated. They creatively destroyed something.

We collectively spend significant time and energy on the “victims” of creative destruction. The companies that failed to innovate and went under. The efforts those companies failed to make or technologies/processes they failed to incorporate or capitalize on. The people that poured their labor and personal capital into efforts that failed. Often these stories turn into blame ascribed to the new success or even the single individual.

In other words, what Schumpeter describes as an essential part of systems results in the demonization of those who try and even those who succeed. Even when someone is celebrated for success, it almost never takes long before they are torn down. As much as people generally love success there’s also a view that however the success is achieved, they must not have been playing fair.

Of course, and I can’t say this strongly enough, there’s nothing in being successful that itself grants immunity from vast transgressions as history has shown. There’s nothing about having entrepreneurial spirit that makes one virtuous.

But, and I also can’t state this strongly enough, there is a great deal required to creatively destroy something or to develop disruptive technologies. There’s a very strong requirement that one be able to absorb significant negativity, ignore a vast amount of inbound criticism, and make decisions that indeed have unintended consequences.

There’s one other set of skills required that many find distasteful but are equally necessary. That is, one also needs to be strongly critical of the world—the product, the technology, institution, or process—that one wishes to displace and disrupt. Many find this unpleasant.

Examples of such inter-personal challenges are endless. Some are so classic and the success so great and often so long ago we forget the dynamic. It is worth an example or two to illustrate this and explain why this is all necessary.

When Steve Jobs was leading the introduction of the iPod (this is an iPod example not even an iPhone one) he famously made the initial launch all about the benefits of mobility and having your entire music collection in your pocket. We remember those benefits. At the same time, everything about the product was attacked and often from both sides. The sound quality, the user interface, and the selection of music in the store (2003) were all criticized. The music industry which had done the $1/track deal came under fire from the musicians. Because it was the early 2000s and PDAs were popular, tech reviewers balked at having a music-only device. Many insisted that mobile phones needed to be the music player. The list goes on and on.

Because Steve, as we all know, was a disruptor at heart going as far back as one can, he had already lived the PC revolution, the failure of Lisa, the rise of Macintosh, the return to Apple, and more. He was not only dismissive of these and other complaints, but he was outwardly aggressive about them. At D2 May 2004, asked about coming out with a phone in this context after the galactic success of the iPod, Steve famously said there was no way to innovate in phones because the “problem with a phone is that we’re not very good going through orifices to get to the end users.”

Image
"In the cell phone market you've got 5 orifices" Steve Jobs at D2, May 2004 with Walt Mossberg.

I was in the audience at that moment and can definitely say there was an audible gasp. Here we were at what we thought was the height of cellular/mobile phones with Nokia and Blackberry on top of the world and AT&T/Cingular on the hugest of huge rolls (not unlike today’s semiconductor rally on Wall Street) and Steve just trash talked the lot of them, and rudely so. The reaction of those companies was to be rude back. Industry experts jumped on CNBC and the WSJ with stock charts of Nokia, AT&T, Blackberry, and talked about PDAs more.

Fast forward to the iPhone launch—a lesson in his reality distortion field and knowing that when Steve trash-talked something he was about to trounce it—and again the entire first part of the launch is a literal takedown of the existing phone industry. He ranted about screen size, little keys, the stylus, weird software, and more. It was brutal.

Image
While this is a classic marketing positioning slide, don't let that fool you. The tone and tenor of the speech could only be described as something between disdain and disgust.

The reaction to the launch was entirely predictable. I watched that reaction from the Windows Phone team (this is the old Windows Phone with the start menu and most models had a stylus and resistive touch screen). The iPhone was dismantled on technical and business grounds. It was too expensive. Touch won't work. One carrier isn't how the industry worked. The carriers other than AT&T/Cingular pointed out that there was no way for the unique carrier services that provided real value to do their job. Blackberry went all in on battery life, typing speed, and their BBM messaging service loved by folks in finance and legal. Apple and Steve retreated from those battles and did the work of execution they knew disruption required.

Image
Steve Jobs mocking the stylus that was so popular on Windows devices and Palm Pilots at the launch of the iPhone.

The first year was brutal. It looked a lot like the 1984 Mac. It was an incredibly innovative product that came with the expert consensus that it would be at best a niche. Then the first year of sales was not mind-blowing and the “market share” of Nokia and Blackberry—and importantly their stock prices—did just fine.

Image
Worldwide Smartphone Market Share. The iPhone launch was in mid 2007. Note how even two years later, in contrast to how most of us probably recall it, the share of Nokia and RIM did not evaporate (though Windows Phone’s might have).

Then came apps. Then came failed products that tried to thread a needle in responding. And so on. The rest is history. The creative destruction happened. RIM is what it is today. Windows Phone and Nokia were what they were. The people, teams, and ecosystems moved on. Without that destruction there can be no global smartphone revolution. The displacement of technologies, companies, and even individuals just had to happen. And those steps they took from the 1990s to 2008 or so also had to be taken. Without those innovations, those disruptions, there could have been no baseline to build today’s smartphone. Most reading this are too young to remember, but before mobile phones there were only land lines. Business communication meant an AT&T calling card near the gate at the airport or spending an hour making dial up work after hijacking a hotel fax machine phone line. All that infrastructure, all those companies, all those processes were displaced.

So often when this story is told, we look back with nostalgia. We remember fondly the old way of doing things. We remember the glory of the introduction of the new way. We collectively often memory hole the absolute hell that everyone on all sides of a disruption go through. We forget:

  • Experts telling us all the new thing can’t possibly work.

  • Incumbents explaining all the problems with the new thing and the societal costs.

  • Industry leaders detailing the failings of the new approach.

  • Individuals working on the current products, worried and defensive, turning to anything from defiance to flight.

  • Ecosystems around the current product joining in this chorus while defending their own processes, contributions, and businesses.

  • Everything from public company investor relations to labor unions to trade associations going after the disruptor, both the entity and the personalities associated with the disruption.

All of this is done at brutally personal levels. Some might think this is a product of the new online social network world, but above I tried to provide a glimpse of the negatives that flew around. The real costs. The real “attacks” for lack of a better word. There’s nothing that social networks, including X, do that wasn’t already happening. Yes, everyone can see it faster, and sure, people who aren’t in the nexus of the situation can see it happening and offer their opinions. But no one working on the disruption suffers more because of it; they are already suffering.

Image
From The iPhone User Experience: A First Look by Bruce Tognazzini, who was hired at Apple by Steve Jobs and Jef Raskin in 1978, remained for 14 years, and founded the Apple Human Interface Group. https://www.asktog.com/columns/070iPhoneFirstLook.html

The responses or even the first volleys are also brutal and direct, as Steve Jobs was by seeding the fight with carriers. Why is that? This is what it means to lead and be at the tip of the spear of disruption. It isn’t enough to just introduce something even if one gets that far. One needs to build a team that shares the same vision and same spirit of breaking from the past to do something new. In Steve’s case and in most all cases, you must get a whole “system” to change.

Image
From The NY Times in 2009. Even almost 2 years later the incumbents without iPhone were both doing well and remained dismissive of the iPhone impact.

Many might think this isn’t necessary or shouldn’t be. But there is no other way. An agent of change is a lightning rod because the default action for every technology, process, business, or institution is to *not* change. The response to potential change is by default to resist change and to protect the status quo and incumbency. Change is difficult, and it is perceived individually and institutionally as costly (and in some cases really is). This is the root of everything from NIMBY to industry groups protecting the industry.

Many also suggest that no matter what a change might be or how significant, there should be a way to ease into a change. Go slow, they say. Compromise, they say. This is perhaps at the root of the most polarizing debates. Everyone always wants a simpler and longer path to a change. Why change the user interface all the way, why not just fix some problems? Why not provide an option to do things the old way? Give us a period of migration and adjustment.

This really doesn’t work the way people think it should. The problem with half-measures is that they are, well, half measures. Taking baby steps all but guarantees the innovation will fail. Amazon famously ignored the experts around retail and refused to accept telephone orders (I had a friend from college who worked hourly staffing a phone center for people calling up trying to order books). The problem, obvious in hindsight, is that it means building out a whole system and process to handle phone orders, phone returns, phone credit card charges, etc. All of those take time and effort away from building out e-commerce, which was the goal. Importantly, once they started taking orders by phone there would almost never be a way to turn it off other than to wait until no one ordered that way. Think about how long it took Netflix to stop mailing out DVDs or how many people, surprisingly, still have dialup internet. The long tail not only costs a lot of money and time, but literally prevents disruptive innovations from being fully formed and focused.

Disruption is one of my recurring themes in “Hardcore Software” where I write about accounts going back to just literally using C++ to write Windows programs (“can’t work”) through building a suite of products (“need best of each category”) to introducing a server for Office (“no one wants a server for Office”) to Clippy (embarrassing) to redesigning Office head to toe, and yes Windows 8.

One of my most significant and early lessons that helped me to see the personal cost to experts when it comes to what I think everyone might consider routine innovation is when we did the most mundane thing to the “setup program” for Office in 1997. This was literally just the way files got copied from the floppies to the hard drive to run. Enterprises customized Office setup back then and to do that they would edit what amounted to our *internal* table of files to be copied, SETUP.INF. It was a crazy format, fragile and inscrutable, but necessity mandated they figure it out.

We introduced a whole fancy new database (you might know this as MSI files in Windows today) to make it “easy”. In the process, though, the very people we were trying to help ended up hating, I mean hating, what we did. Why? Not because it wasn’t better. No, simply because it was totally different. We were victims of our success and by changing it we took away their skills. We made it harder for them to be successful. They hadn’t yet paid the price of the fragility down the road that we were worried about (stuff like a corrupt registry). They just had to learn a new thing. Time passed. Everyone adapted.

I could repeat this for a half dozen other things. Changing the user interface for hundreds of millions of people using Office over decades was an even bigger deal with a massive pushback. That worked. Changing the Start menu for Windows, not so much.

I can say with certainty that success and failure feel exactly the same and carry the same dynamics, and that attempting or succeeding at being disruptive comes at an enormous personal cost on top of being incredibly difficult. It also isn’t automatic. For every example like the iPod or the dot com failures, there are untold other failures and far fewer successes, but the pattern is always the same.

Not everything is a disruptive change and it can be pretty lame to label everything and anything (from a startup or anywhere) as such. Certainly just applying that label isn’t an excuse to be rude in public discourse. In other words, the actions of disruptive behavior need to support the broader disruption.

Therefore, when someone or some organization is trying to do something new and, dare say, taking on the role of a disruptor, recognize that the pattern will always be for those who see the change as a risk to tear down what the disruptor is doing. The disruptors will be broadly critical of what they intend to replace as they marshal the teams, ecosystems, and resources required. There’s simply no other way.

It is worth remembering this reality when being critical, harsh, or personal while the change is being attempted and put in motion. Ask yourself if your reaction is coming from a place of defense, a glass house, or maybe just reacting to the style of a disruptor trying something bold or difficult.

It is called creative destruction or disruptive change for those reasons.

Thanks for reading Hardcore Software by Steven Sinofsky! Subscribe for free to receive new posts and support my work.

]]>
<![CDATA[222. Automating Processes with Software is HARD]]>https://hardcoresoftware.learningbyshipping.com/p/222-automating-processes-with-softwarehttps://hardcoresoftware.learningbyshipping.com/p/222-automating-processes-with-softwareThu, 03 Oct 2024 04:01:08 GMTSomething about automation that I believe is that it is *way* more difficult to do than most imagine. Most look at problems to automate from the outside—a process they find tiresome, slow, yet repetitious. They often don’t think of their own skills and tasks as easily automated. The reality is much more difficult.

Most people who have built an automation know, or at least come to know, just how fragile it is. It is fragile because steps are not followed. Tools and connections fail in unexpected ways. Or most critically because the inputs are not nearly as precise or complete as they need to be. And they come to know that addressing any of those is wildly complex. Ask any engineer that has designed their own CBCI, build, or deployment process and they will tell you “don’t touch it unless I am around.”

Thanks for reading Hardcore Software by Steven Sinofsky! Subscribe for free to receive new posts.

There’s a deep reality about any process—human or automated—that few seem to acknowledge. Most all automations are not defined by the standard case or the typical inputs but by exceptions. It is exceptions that drive all the complexity. Exceptions make for all the insanely difficult to understand rules. It is exceptions that make automations difficult, not the routine. And over time most all systems are about exception handling not routine.

The very first business system I built was an inventory system that made it easy to assemble an order for customers and even printed a shipping label. Then one day the business owner (my father) said “hey this customer is going to come by and pick up the order so don’t print a shipping label.” Yikes, I had to figure that out. Then one day another customer said “hey can we pay cash on delivery?” C.O.D. used to be a thing. Then I had to add that. Then there were different commission rates for the new salesperson. And on and on. Oh and then one day UPS came out with an entirely different way to compute shipping costs. Pretty soon my perfectly automated system was not just a UI mess but all the logic was about handling different circumstances for running the business.
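
To make that concrete, here is a minimal sketch of how a “simple” order flow fills up with exception logic. This is hypothetical Python with made-up names, rates, and dates, not the original dBase program; the point is only that every new business circumstance becomes another branch that can never safely be removed.

    # Hypothetical sketch (not the original dBase code) of exception handling
    # crowding out the "simple" case in an order/inventory system.
    RATE_CHANGE = "1984-06-01"           # made-up date for the carrier's new formula
    COMMISSION = {"new_hire": 0.05}      # the new salesperson's special rate
    DEFAULT_COMMISSION = 0.10

    def process_order(order):
        total = sum(i["price"] * i["qty"] for i in order["items"])

        # Exception: customer picks the order up, so skip the shipping label.
        label = None if order.get("pickup") else f"SHIP TO: {order['address']}"

        # Exception: cash on delivery is recorded differently than an account charge.
        payment = "COD" if order.get("cod") else "charge account"

        # Exception: commission depends on which salesperson took the order.
        commission = total * COMMISSION.get(order["salesperson"], DEFAULT_COMMISSION)

        # Exception: the carrier changed its rate formula partway through.
        shipping = total * (0.08 if order["date"] >= RATE_CHANGE else 0.05)

        return {"total": total, "shipping": shipping, "commission": commission,
                "payment": payment, "label": label}

    print(process_order({"items": [{"price": 25.0, "qty": 4}],
                         "salesperson": "new_hire", "date": "1984-07-10",
                         "cod": True, "address": "123 Main St"}))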

This is what defines all modern business systems and processes. Anyone who has ever sat in a business review meeting knows that all the interesting questions are not in the standard report. Who has not lived through “this is our template” for review meetings only to be confronted with an entire discussion about what is not on the template? That’s because all the interesting questions are exceptions to the preconceived notion of what we need to know.

The best diagnosis for exception handling I can think of is to wait on line at the post office. If you’ve ever done that, you know the thought of “doesn’t anyone just want to mail a package” comes to mind. As it turns out the entire flow at the post office (or DMV or tax office) is about exception handling. No amount of software is going to get you out of there because it is piecing together a bunch of inputs and outputs that are outside the bounds of a system.

The ability to automate hinges not just on the ability to know the steps to take for predefined inputs, and not even the steps to take if some inputs are erroneous or incomplete, but what to do if you can’t even specify the inputs.

Almost all travel automation is about not having the inputs. Are dates flexible? Seating flexible? Specific airline flexible? I recently saw a great tweet about how “programming will go away when people can just specify the goal of the code and then the code will get created”. My first thought was “Software engineering and product management. You’ve invented software engineering and product management.”

This was the argument about compilers when they were invented. Many said “don’t worry now that we have a compiler we don’t have to worry about the [assembly] code.” Unfortunately for a very long time to use a compiler meant debugging the compiler. And for any system that matters knowing the output of a compiler continued to matter for decades. And of course knowing what code would be generated matters too (eg recursion is great in a textbook but not in practice).

No current automation example is as interesting (to me) as medical diagnosis. There are tons of claims about how we can automate diagnosis (and treatment protocols). The problem is medicine is incredibly uncertain as a baseline. But patients are also incredibly uncertain. It isn’t just that people are not always complete in even saying their symptoms. For example, shoulder pain might be an injury from the tennis the patient plays 3 times a week and has been around for a couple of weeks. Or it might be metastatic cancer going undetected. That’s quite a range (real example by the way.) So they might not even mention it to a doc. Or if they did, no doctor is going to run and do a CT or even lab work based on shoulder pain. But sure enough there are case histories and thus training models that immediately leap to a worst case. This same holds even for routine preventative work. It is wildly common for routine lab work to fall out of range (most every CBC or Chem20 shows something out of range, defined as 2 SD). This could easily be a lab aberration, a recent exposure to an unknown allergen, dehydration from fasting, forgetting to fast, or eating ice cream 14 hours ago. Sure enough, a trained model will leap to the worst case. If you think healthcare expenses are high now, imagine what happens when the worst case based on incomplete or missing information is used as a baseline.
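
To put a rough number on the “out of range” point: if each analyte’s reference interval covers about 95% of healthy patients (the ±2 SD definition) and a panel runs roughly 20 analytes, then even a perfectly healthy patient is more likely than not to get at least one flagged value. This is back-of-the-envelope arithmetic that assumes the tests are independent, which real panels are not, so treat it as an illustration rather than a clinical claim.

    # Back-of-the-envelope arithmetic, assuming ~20 independent analytes and that
    # each "normal" range covers ~95% of healthy patients (the +/- 2 SD definition).
    n_analytes = 20
    p_in_range = 0.95
    p_at_least_one_flag = 1 - p_in_range ** n_analytes
    print(round(p_at_least_one_flag, 2))  # ~0.64, i.e. roughly two chances in three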

In all these cases it could easily be said that the opportunity is to automate because the reality is that “most” (however that is defined) people in roles are not very good at handling exceptions and software would indeed be better. There is some truth to that—for every story I could produce about a missed diagnosis someone will have one about an astute doctor that had seen a similar zebra earlier in their career. It isn’t that simple though. In most routine automations the way exceptions are handled is by the escalation path of humans. The frustration is real in dealing with this—re-explaining your problem, finding the expert, and so on. But it might also be the best way to handle escalation.

Rightfully one might suggest that automation is the way to handle the front line, at least for now. The problem then becomes not just the error rate in that but that the tools are currently designed to always produce a fine sounding answer rather than say “I don’t know.” This works fine or even better than fine for generating on the fly words or images for basic use cases. But in practice that is because those uses do not have financial, physical, or societal costs. So even the most basic of agents and automation will be seen to require oversight.

I am a proponent of that because that is how AI should be thought of—as a tool for the right people to use. The AI safety world was in a rush to say AI safety requires that AI not be used or should be regulated in whole new ways. Except AI as a tool is regulated just as a PC or calculator or PDR is regulated—the regulation is not the tool but the person using the tool. The liability is with the person that deployed or employed the tool, not the tool itself. There are many edge cases here, such as when a tool was designed with malicious intent or negligence. Those are higher bars as they should be.

We can see this at work with how self-checkout is evolving. You’d think 15 years into the smartphone revolution most people could operate an order kiosk or self-checkout without help. That’s certainly what stores had hoped. But as these are rolling out you can see how these systems are now staffed by people there to handle the exceptions. Amazon Go will surely be seen as ahead of its time, but those stores are now staffed full time and your order is checked on the way out. And special orders at McDonald’s? Head to the counter :-) (I’m picky)

I’m super optimistic about automating things. I’ve spent a career on automation. But part of what that career taught me—from some examples like automating finances in Excel, to document templates, to creating good looking charts in PowerPoint, scheduling in Outlook, workflows in SharePoint…all fairly basic tasks—is that the baseline is easy but highly unsatisfactory. The success in those tools was not because they could help people get started but because they could help people get finished.

Automation will come but the breakthrough is going to look a lot like product management showing up and spending a huge amount of energy on what the inputs to a system are and what it means to have and handle exceptions.

In the end, that might look a lot like programming…again.


My favorite example of automation is scheduling meetings. I never met an important big boss with multiple administrative assistants who didn't tell me scheduling meetings is easy and ask why they still need admins or why Outlook doesn't solve this. Oy vey.

The least empathetic managers I knew had no idea how hard it was to schedule meetings. There is literally no standard process. Everyone who ever told me it was standard ("I [big boss] always let people use my blocks of time MW 10-12") simply wasn't thinking about how the organization contorted around their rules and how every reschedule was a cascade of scrambling and frustration. Problem solved.

I will go out on a limb and say this: much like travel planning and medical diagnosis, scheduling that most valuable asset, time, will be the very last task to be completely automated from end-user to completion in one step. These tasks will be assisted, the way the web totally changed travel planning but really didn't automate it as much as just augment it. These systems will be automated by an elaborate dialog of collecting and structuring inputs such that the output becomes deterministic and repeatable.

We thought we'd solved scheduling in Outlook when we invented "free/busy" time and even had the first "schedule over the internet" in 2000, I think. Fast forward two decades+ and there are now systems where one side does the up-front work to structure a calendar and then just asks the other side to do the work of picking an open time with a web site. This is not unlike what we first did as free/busy. I don't know about you, but I am so far trending 100% on meetings rescheduled where I pick the time off someone else's calendar without their specific input.
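
Here is a minimal sketch of the free/busy idea in hypothetical Python (not how Outlook or any scheduling product actually implements it): once everyone's busy blocks exist as structured data, finding a common open slot is deterministic. The hard part of scheduling is everything that never makes it into that structure.

    # Hypothetical free/busy sketch: deterministic once the inputs are structured.
    WORKDAY = (9, 17)  # 9am to 5pm, whole hours for simplicity

    busy = {
        "alice": [(9, 10), (13, 15)],
        "bob":   [(11, 12), (13, 14)],
    }

    def first_open_slot(busy, length=1):
        # Scan each candidate start hour and accept the first one that overlaps
        # nobody's busy blocks.
        for start in range(WORKDAY[0], WORKDAY[1] - length + 1):
            end = start + length
            if all(end <= b_start or start >= b_end
                   for blocks in busy.values() for (b_start, b_end) in blocks):
                return (start, end)
        return None

    print(first_open_slot(busy))  # (10, 11) for the sample data above

Everything interesting about scheduling (priorities, travel time, who outranks whom, the meeting that cannot move) lives outside this structure, which is the point.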

Automation is difficult even for seemingly simple things. It is highly likely that what will ultimately be automated by AI falls into two classes:

  • Totally different tasks than what we view as laborious or tedious today. In other words, not the first things that jump to mind.

  • Work that is open to be completely redefined to function in some new way.

My favorite example of the latter is how the arrival of IBM computing in the 60s and 70s totally changed the definition of accounting, inventory control, and business operations. Every process that was "computerized" ultimately looked nothing at all like what was going on under those green eyeshades in accounting. Much of the early internet (and still most bank and insurance sites) looks like HTML front ends to mainframe 3270 screens. Those might eventually change, just not quickly. It might be that the "legacy" or "installed base" of many processes is such that the cost to change is too monumental. It might be that AI will bring more fundamental changes to whole countries that were never automated the old way, or maybe to the most recently automated countries—the way much of Africa only ever knew mobile phone computing or East Asia is primarily cloud-based computing.

There's a huge opportunity for the US and WE to use AI as a chance to reconsider digital infrastructure.

That's a heavy lift.

—Steven Sinofsky

# # # # #

Thanks for reading Hardcore Software by Steven Sinofsky! Subscribe for free to receive new posts and support my work.

]]>
<![CDATA[221. Using AI for School is NOT Cheating]]>https://hardcoresoftware.learningbyshipping.com/p/221-using-ai-for-school-is-not-cheatinghttps://hardcoresoftware.learningbyshipping.com/p/221-using-ai-for-school-is-not-cheatingThu, 22 Aug 2024 21:15:08 GMT

In the latest call to act on the challenges and problems with AI in this story in The Atlantic, the author posits universities and colleges do not have a plan to deal with all the fraud created by students using generative AI:

But his vision must overcome a stark reality on college campuses. The first year of AI college ended in ruin, as students tested the technology’s limits and faculty were caught off guard. Cheating was widespread. Tools for identifying computer-written essays proved insufficient to the task. Academic-integrity boards realized they couldn’t fairly adjudicate uncertain cases: Students who used AI for legitimate reasons, or even just consulted grammar-checking software, were being labeled as cheats. So faculty asked their students not to use AI, or at least to say so when they did, and hoped that might be enough. It wasn’t.

While the story shows some professors who are using AI or trying to, by and large the issue is about “fraud” with students. Putting aside the obvious irony/craziness of faculty worried about student fraud in an era of widespread faculty fraud all the way to the level of university president, the issue is not new. Whether it was calculators, encyclopedias, “online databases”, or the internet itself, new technologies have long been labeled as cheating or fraud.

This is a personal story of getting caught on the cusp of technology change and concerns over fairness and cheating. It is about the first use of “word processors” in freshman English in 1983. This originally appeared on X.

When I was a college freshman in 1983 the DOS PC was exactly 2 years old. Cumulative to date IBM PC sales had just surpassed Apple ][ sales, with about 1.5 million units worldwide. The most common graduation gift my classmates had received was a Smith-Corona typewriter followed by a fancy TI calculator. During registration we received a punch card with our random 4-character USER ID (mine was TGUJ for use on the mainframe CORNELLA) along with a $100 credit for CPU time and disk space. As I described in "Hardcore Software" my father (for reasons I will never know) bought an Osborne computer for his business in 1981. For college I was sent off with a second one just to fix bugs in the inventory management program I wrote in dBase. No one else in my dorm had a computer.

A Smith-Corona like the ones most of my classmates had. I never owned a typewriter until the one described herein. (eBay)

A surprising thing happened during course registration: a sampling of students was picked to be part of an "experiment" to take place in the required freshman writing course. Some students were told they could optionally join a class that would be using a "word processor" to write their papers. The goal was to evaluate whether students wrote better or worse if they used a word processor. Supposedly professors were tracking or measuring quality of rough drafts and final papers or something.

Roughly what the CORNELLA mainframe I used looked like. I only saw it once as it was located in a building out by the Ithaca, NY airport where there was power and cooling available. (Wikipedia)

I was randomly chosen for the "word processor" cohort. We went to a special orientation to see the Xerox word processors and had to shell out $10 to buy an 8" floppy. The word processors were dedicated to word processing (computers that only ran one program for word processing) and there were 4 in a tiny room in the basement with one printer. I had a moment of panic because I had simply assumed I would use WordStar on my Osborne and my dot matrix MX-80 printer. I was all set up in my dorm ready to go. The professor told me I had to go to the dean to get permission to use my Osborne. It might break the experiment.

The Osborne. The modem described is not pictured. That 5” screen was 24 lines by 52 characters. The floppies were 90K bytes.

These “word processors” were ridiculous. They were up a giant hill (like all computing resources), weird, and slow. Plus 8” floppy. The keyboards had all sorts of weird colored function keys that made no sense. WordStar was already native to me and my computer was in my dorm. I knew all the shortcuts.

A Xerox word processor. I think this is the model they had at the time. I only saw it once. Notice the 8” floppy (I still have my disc!) (Source: digibarn.com)

The dean and I had an interesting (and very short) conversation. First I was just terrified. I was 3 days into orientation and already meeting the dean. I was told I had to be in the word processor cohort because otherwise it would be "unfair" to other students in the typewriter cohorts. BUT he insisted I use a “letter quality” printer. In a panic I called home. We did not have one and was worried about the expense. I found a daisy wheel typewriter available from 47th St photo (they had a toll-free number to call to talk to a sales person and figure this all out) which I could order and get shipped to college. It connected to an Osborne over a parallel cable and I could hack WordStar to output to it and even got Bold working.

A bag from 47th Street Photo. Was the go-to store for everything electronic in the 80s (before B&H). It was kind of a miserable store to use but they knew and had everything. (personal)

The writing class was fine. Every week we had a paper due. I was just a normal freshman up late on Thursday finishing it. I’d hit save to my 5 1/4” 90K floppy and then ^KP to print. Like a Gatling gun or teletype (think opening scene of All The President's Men) my typewriter would spew out my 5 page paper and annoy my roommate and guys across the hall.

I have no idea how the “experiment” went for our class or what conclusions they drew about using a word processor.

Some of the first Macintosh computers were in the undergraduate library at Cornell. (Source: History of Computing at Cornell, https://www.cac.cornell.edu/about/pubs/History_Computing_Cornell_Rudan.pdf)

As it turns out, the results of the experiment wouldn’t matter at all. A funny thing happened over the traditionally long winter break in January 1984. The then #2 computer company ran a commercial on the Super Bowl about why 1984 would not be like 1984. When I got back to school there were about a dozen Macintoshes that had replaced VT-100 terminals in the main terminal room for engineering and maybe 100 for public use around a campus of 20,000 students due to the wonderful Dan'l Lewin who created this academic niche for Steve Jobs. And suddenly for about $2500 (same as Osborne) students could buy their own Macintosh. No one ever said again that using a computer was “unfair” or even worse cheating. As an aside, I had used the pre-release Macintosh in a sealed off room as part of my part time job as computer terminal operator, but that’s a different story. I also worked in a facility that had Windows, SGI, mainframe terminals, and a PDP-11.

This is what it was like to buy a Macintosh as a student in 1984. This is around the corner from the computer room I monitored on Friday nights that had a PDP, SGI, as well as one Mac and an IBM PC (eventually with Windows 1.0). Source: history of computing at Cornell.

As a lucky freshman with computer literacy the whole episode seems just silly. I never really thought about it.

Fast forward to the release of ChatGPT. Through a friend who is an active alumnus (I am not) I tried to reach out to the people in charge of the freshman writing program—the same one I took. We wrote up a version of this story for them to see. We had two things we wanted them to think about:

  1. First, the technology is here and some set of students are already using it in high school so it makes a lot of sense for the writing program to make sure all students have access. This would improve things over the “experiment” we lived through.

  2. Second, without going through a process where all students are put on the same page about the technology, the pros/cons, the limitations, etc. along with the faculty, it is highly likely that some students will be accused of having an “unfair” advantage or worse “cheating” or something like that. I suggested that the use of ChatGPT in class as a tool would help the school arrive at use cases, policy, and rules so that students and faculty would all be on the same page. I was certain some, probably most, profs would “ban” the technology and some, small number, would embrace it.

I was offering to help connect them to the right people, acquire the needed materials and licenses, and so on.

It took a lot of back and forth over a month and they basically (well, literally) told me that they would not have anything to do with this sort of activity as they cannot "mandate" how faculty teach and so on. It was essentially a head-in-the-sand response.

Sure enough, the following 18 months saw numerous incidents and stories in the Cornell newspaper about the use of AI in classrooms, students doing dumb things, faculty doing dumb things, controversy, and so on. There were student editorials and stories about using the technology in one class but not another. There were faculty stories of "cheating" and of course stories of hallucinations, embarrassments, and inaccuracies in student work.

No one, not one single class, put up a fight about using a computer when I was in college. My letter-quality printer (that's what they were called) made it look like I used a typewriter. In fact, unbeknownst to my teachers I was using an "online service" to find "information" for papers I was writing (such as on "acid rain") because the Osborne had a modem and I could dial up at 300b to an online database that was included as a trial version. It beat slogging the 20% grade hill in 20° weather at night. I guess I was cheating then too. I started to use a Xerox Star to do my lab reports because I could print for free (Mac laser printing was $0.15 per page) in the computer science lab plus the word processor and drawing program were way better—I spent Friday nights helping classmates recover corrupt and thrashed MacWrite documents because the Mac was so flakey. My profs were blown away and never accused me of cheating, though that was because the actual chemistry in my lab reports was abysmal.

George R.R. Martin writes on DOS-based WordStar 4.0 software from the 1980s.
WordStar (this is a later version as the screen shots for the original are all too tiny and fuzzy).

This was all strikingly familiar to me not just because of my own freshman year of college. But also my freshman year of high school. That was the year the Texas Instruments TI-35 calculator came out and cost $29. People were getting them in droves. Using one was deemed cheating. By senior year physics a TI-35 was required. When our elementary school got encyclopedias they could only be used in the presence of the librarian to make sure we would understand the difference between a primary and secondary source.

History rhymes but sometimes it just plain repeats. Keeping technology from students while they are learning is completely absurd. Labeling it cheating is the worst way to do that.

]]>
<![CDATA[220. Are AI Expectations Too High or Misplaced?]]>https://hardcoresoftware.learningbyshipping.com/p/220-are-ai-expectations-too-highhttps://hardcoresoftware.learningbyshipping.com/p/220-are-ai-expectations-too-highSun, 16 Jun 2024 17:30:56 GMTLLMs and Search

The idea that the current state-of-the-art LLMs would simply replace search might go down as one of the most premature or potentially misguided strategy “blunders” in a long time. Check out this example below, though it is just one of an endless set of places where LLMs convincingly provide an incorrect answer.

A Threads post from someone claiming they have replaced the use of search with ChatGPT. The replies went on to show numerous other answers they received in response to the query, as well as the correct answer, which never seemed to be one of the responses.

Subscribe. It’s free and you won’t miss a post.

There was nothing about what ChatGPT was doing at launch (or now) that would have indicated using it as a replacement for search would make sense, even though it had the look that it might. You can’t ask an oracle designed without a basis in facts questions you don’t know the answer to and just take the responses at face value because “they sound good.” Since they sound so definitive and the chat interface will give a great sounding answer to most any legit question, it is left as an exercise to the prompter to then google for verification. Almost literally “The Media Equation” reality.

The Media Equation: How people treat computers, television, and new media like real people and places, a seminal book on the way humans and machines interact by Stanford professors Byron Reeves and Clifford Nass. They were both deeply involved in the original implementations of both Microsoft Bob and Office Clip-It.

That chat gives different answers to different people or at different times should immediately jump out as a feature of a different use case, not a bug in search.

The butterfly effect of this mistake is incredible. It directed so many efforts away from the amazing strengths of LLMs, made regulators take note of a technology that would “disrupt” Google (so it must be important), and implied anything so powerful needed immediate regulation. As hallucinations (I seriously dislike the use of an anthropomorphism) became routine, even more focus on regulation was needed because we obviously can’t have wrong answers on the internet. Fears of LLMs designing chemical weapons or instructing people how to do dangerous things, when they could send you down the wrong path baking cookies, were peak over-fitting of concerns. The rush to overtake Google with LLMs put the news and entertainment industries on high alert given their past experiences, and with that came a rash of copyright litigation.

At the same time it meant people were not focused on the creative and expressive power of the technology and doing what it did so uniquely well. Worse, those seemed like pedestrian use cases compared to the grand vision of replacing whole parts of the economy.

It reminds me of when Bing let “Sydney” out and it was even better than we could have hoped for creativity. But then the panic set in and now here we are. Sydney is gone. Bing is using OpenAI for search results. Google is in a panic and rushed to inject Gemini into search results.

It feels like when expert systems came around in the 1980s and the early use cases touted were simple things like curing cancer or diagnosing diseases, which were just life or death matters in what were already highly uncertain contexts. Just the wrong domain too soon for what was a novel and innovative technology. What are expert systems technologies up to these days?

Here’s a perspective from me from February 2023 at the time of GPT3 and the “release” of Sydney, AI, ChatGPT, and Bing…Oh My.

So weird we are years into this and still in the same spot, only much more distracted with a crazy focus on potential for things LLMs have not made a lot of progress at doing.

It might very well be the case (I certainly hope so; for example, you.com is making progress all the time) that invention and innovation will move the technology forward and introduce factfulness to LLMs (at a reasonable cost to develop and consume). In that case this will be just another case in tech history where early is cool, but turns out to be the wrong time/approach/tech.

It might be the case that more compute, more data, more training, and so on will break through. Smart people in the field though are doubtful that “more of the same” will lead to a breakthrough in fact-based answers.

On the other hand, until we have that invention and innovation it might just be that trying to use LLMs for search will be what causes the next AI winter, just as machine translation or expert systems caused previous winters. That would be a bummer. The expectations are so high for LLMs to disrupt Google, replace search, and reinvent research that anything less than that will be billed as disappointing.

I think it is legitimate to start asking if we’re pointed in the right place.

The following are two examples of a simple prompt someone sent me in reply to an original posting on X, showing the latest ChatGPT and Perplexity (the latter is often cited as providing more truthful answers). In the results, which are presented in a rather convincing format, there are people who don’t exist and others who never worked on Windows, as well as features that seem arbitrarily elevated. I can guess that the names come from people who either wrote or spoke a good deal. The features might have been ones that generated support or other tech press chatter. I have no idea how security was rated.

This is a way of asking whether LLMs today have the potential to induce another “AI winter,” as a new technology that was broadly seen as taking us to the next general level of AI.

AI Winters

AI has progressed for decades through step-function innovation—big huge inventions followed by long periods of stasis. That’s different than the exponential curves that we think of with hardware, the more linear improvements in business software, or the hit-driven consumer world. There’s been a lot of long term research, so-called “AI winters,” and then “moments” of productization.

When one of these productization moments happens, it is heralded at first as an advance in AI. Then, almost in a blink, no one thinks of those innovations as AI anymore. They simply “are.” The world resets around a new normal very quickly, and what’s new is just cool but rarely referred to as AI. This is the old saying, “as soon as it works, it is no longer AI.”

We don’t think of the decades of research behind map directions/routing, handwriting recognition, spelling and grammar, image recognition, the matching that happens everywhere from Airbnb to Bumble, or even more recent photo enhancements as AI so much as just “new features that work.”

The chart below, from “The History of Artificial Intelligence,” provides a view of the roller coaster that has been AI, along with the major advances. The article is a short history of those advances and worth a look.

I think in a short time we will look at features like summarization, rewriting, templates broadly, adding still photos or perhaps video clips, even whole draft documents, as nothing more than new features, with hardly a mention of AI, except perhaps in marketing :-)

Why is AI like this? Kind of interesting. Is it that AI really is an ingredient technology and always gets surrounded by more domain/scenario code? Is it that AI itself is an enabler that has many implementations and points people in a direction? Is it because the technology is abstract enough that it defies clear articulation? When you compare it to other major technology waves, it just seems to keep happening. Maybe this time is a little different because so much focus has been placed on hardware: GPUs, TOPS, custom chips. I wonder if that will make a clearer demarcation of “new,” but I’m not so sure.

A couple of slides from almost 10 years ago that Frank Chen and I made for a lunchtime talk. Winters and reality.

Another thing about the history of AI is how, at each step of innovation, there was a huge amount of extrapolation about all the things a specific advance would lead to. But things don’t play out that way.

This is a lot like medical research (and many innovations in AI were in medicine, like expert systems). Each discovery of a gene or mechanism is supposed to lead to a cure, but the extrapolation proves faulty.

Innovations in AI have been enormous, but they also haven’t generalized to the degree envisioned in the early days.

Perhaps one way this could play out is that LLMs remain excellent as creative tools and generators of language but do not make progress on truth or factualness. The chosen domains for applying LLMs will remain where they are strong today and won’t generalize that much more.

The question worth discussing is just where we are on the journey: an AI winter or AGI? Sure, we’re somewhere between those extremes, but which way are we tilting?

—Steven

These were originally posted on X and edited here for completeness and clarity.

]]>
<![CDATA[219. On AI Requiring a New OS]]>https://hardcoresoftware.learningbyshipping.com/p/219-on-ai-requiring-a-new-oshttps://hardcoresoftware.learningbyshipping.com/p/219-on-ai-requiring-a-new-osThu, 06 Jun 2024 02:00:14 GMTMy friend @dsobeski asked:

dsobeski @dsobeski This is a question for one Steven Sinofsky (@stevesi) - do you believe there is value in creating a new operating system for the upcoming age of AI and devices?

Q. Do I believe there is value in creating a new operating system for the upcoming age of AI and devices?

tl;dr A. It is too soon to know. Also, it is important to define what an OS is and where the innovation happens in order to ground the discussion technically.

Operating System Concepts, 1st Edition, Peterson and Silberschatz
My first OS book and the canon of OS teachings AFAIAC.

An OS is really three things, and conflating them makes this impossible to answer. Separating them, one can at least articulate a go-forward point of view:

(1) Hardware interface and resource allocation. At its core, an OS by any traditional measure is an interface to hardware and a mechanism for allocating and managing those resources. The “computer science” on this is well established, and not since the advent of virtual memory have we really seen much of a change. I don’t think AI will warrant a new approach to this level of an OS. What would drive that is if one believes these primitives themselves will be rearchitected IN AI (as opposed to rearchitected to support AI). I don’t see the primitives of networking, files, tasks/processes, and so on as being ripe for rearchitecture in the face of what we call AI today. Can I imagine algorithms that are either derived from or utilize AI techniques to perform better? Yes, absolutely. Can I imagine system primitives that are tuned to managing hardware abstractions required for AI? Absolutely.
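To make that last point concrete, here is a minimal sketch, using entirely made-up names (this is not any shipping OS API), of what “system primitives tuned to managing hardware abstractions required for AI” might look like. The point is that an accelerator would most likely be exposed through the same acquire/submit/release pattern the OS already uses for files and devices, rather than through some wholly new kind of kernel.

/* Hypothetical sketch only: none of these calls exist in any real OS.
 * An AI accelerator (NPU) is managed like any other scarce resource:
 * acquire a handle, submit work against a budget, release the handle. */
#include <stddef.h>
#include <stdint.h>

typedef int npu_handle_t;              /* opaque handle, like a file descriptor */

struct npu_request {
    size_t   scratch_bytes;            /* on-device memory to reserve */
    uint32_t compute_budget;           /* requested share of accelerator time */
    uint32_t flags;                    /* e.g. background or low-priority work */
};

/* Acquire a share of an accelerator, the way open() acquires a file. */
npu_handle_t npu_open(const char *device, const struct npu_request *req);

/* Submit a compiled graph/kernel and wait (or time out) for completion. */
int npu_submit(npu_handle_t h, const void *graph, size_t len);
int npu_wait(npu_handle_t h, uint32_t timeout_ms);

/* Release the reservation so the scheduler can hand it to someone else. */
int npu_close(npu_handle_t h);

The shape is deliberately boring: the AI-specific work would live in the scheduler and resource accounting behind an interface like this, which is exactly the kind of evolution operating systems have absorbed before without being replaced.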

(2) User interface to the OS itself. So much of the debate/dialog around an OS is about the user experience for a relatively tiny number of concepts, primarily launching programs, files, and settings. It is almost certain these will undergo rethinking in light of AI, but this layer is well above the “OS” proper, and rethinking these is essentially how iOS evolved (and also Linux). It is not difficult to imagine, for example, launching programs via voice or text versus pointing or tapping. In fact, many people reading this probably use search to find programs anyway. Much ado was made of Windows 8 and the Start screen in this realm. So much so, in fact, that few even remember you could just start typing a program name at the Fisher-Price screen to launch a program (or search, a URL, etc.). Nothing at all tells me that settings will undergo a change due to AI, since the trajectory should be to keep having less and less hardware/resource management there anyway. Finally, files are disappearing pretty rapidly, though you wouldn’t know it by how developers talk (since code is file-based). The file-less future is farther away, since sharing and archiving are still very file-focused and also because of photos and videos. Still, I don’t think changing these necessarily means a new OS until you get to the third point. I would also argue that, in the end, the difference between iOS and Windows/macOS with respect to (2) is vanishingly small.

(3) An API for developers to create programs. In many ways the most interesting part of “do we need a new OS” is really about what APIs a developer has access to and uses. This defines the way people truly experience an OS and also the economics. Today, if you take my conclusion in (2) as a given, the primary difference between “modern” and “legacy” in my worldview is the level of abstraction and access to (1) and (2) that developers have. By virtue of up-leveling or restricting (depending on your point of view), apps moved up the stack. People have fights to the death over “do anything on MY computer,” but the reality is computing is fundamentally and vastly better because of the modern level of abstraction and control on “modern” platforms, even though from a classic view of “microcomputers” this is a set of restrictions. Everything from running at a lower trust level to not being able to access and/or control/monopolize hardware from apps is an aspect of this level of “go be creative elsewhere.” I would *argue* that with AI we need much more of this, not less, because the risks of bad actors trying to exploit a computing endpoint increase.
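To illustrate what that higher level of abstraction could look like in practice, here is a hypothetical sketch (again, invented names, not any real platform’s API) of an AI capability exposed to apps the way modern platforms expose payments or location: the app declares what it needs and the platform mediates everything else, so the app never touches the hardware, the model, or data it wasn’t granted.

/* Hypothetical sketch only: illustrative of a capability-mediated app API,
 * not any shipping platform. The app asks the platform to do inference on
 * its behalf; the platform enforces entitlements, rate limits, and budgets. */
#include <stddef.h>

typedef struct ai_session ai_session_t;   /* opaque, owned by the platform */

/* Succeeds only if the app's manifest declared the named capability. */
ai_session_t *ai_session_open(const char *capability);

/* Run a platform-provided model over text the app already has access to. */
int ai_summarize(ai_session_t *s,
                 const char *text, size_t len,
                 char *out, size_t out_cap);

void ai_session_close(ai_session_t *s);

That is more restriction, not less, than today’s desktop model, which is the point: the platform keeps the dangerous parts for itself and hands developers a narrower, safer surface to build on.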

Note, there is also the actual device and hardware, as well as an ecosystem around that. I am going to skip that and assume today’s hardware (sensors, compute, graphics, etc.) continues to improve, get smaller, use less power, and more. I’m also specifically speaking to client-side operating systems, since my assumption is that the era of server operating systems is both on a different trajectory and the domain of concern for a smaller set of people.

Now the question for (3) is: will that drive an entirely new OS, or will it be layered on one or more existing (1)+(2) OS platforms? It will certainly be layered on top in the near term. We have data on this from the internet. The world figured out it could turn anything into an internet operating system (e.g., TVs, mail machines, IoT, the browser as an OS, etc.), but having the internet on an OS that also had robust versions of (1) and (2) mattered. That 7-8x as many people in the world decided phones/tablets were a more accessible way to use the internet speaks to that more than anything. Having a unique (3) meant everything to scale.

That would argue that a new OS can certainly be built and thrive. The question becomes which existing platform offers interesting and useful abstractions that do not get in the way of doing what developers want. The mobile OS platforms provided rich, abstracted internet capabilities along with many things like payments, security, privacy, and more (all lacking on legacy OS platforms). Will whatever AI comes to mean get delivered the same way, or will the presence of those abstractions get in the way? It really depends on what AI is doing a couple of years from now that it is not doing today.

We have evidence for this point as well with mobile itself. The idea of building a mobile device out of (3) from legacy platforms could *not* work *because* of the assumptions about the legacy device. The fundamental developer model in use simply conflicted with what could be offered as a modern OS. Once Apple decided that touch, privacy, app sandboxing, ultra-long battery life, always connected, quality over time, and so on were the new value propositions, there was no way to really back into those from (3) of the Mac. While they could get close, Windows, with its focus on x86 hardware, application, API, and interface compatibility, was in no position to provide an offering. Please see hardcoresoftware.substack.com and the prologue.

So, does AI require a new OS? Not yet, for sure. In five years, will apps, or better, a set of value propositions for apps, begin to emerge that require a new level of abstraction to fully deliver on a new promise of AI? Maybe. I don’t think I know what that would be now.

]]>