Innovation – Amar Bhidé
https://bhide.net/wordpress_files

Defending “rapacious deregulation” (Barron’s op-ed) + Council for Transformative Enterprise
https://bhide.net/wordpress_files/index.php/defending-rapacious-deregulation-barrons-op-ed-council-for-transformative-enterprise/
Fri, 02 Jan 2026 21:03:39 +0000

It’s Time to Unleash the Public Markets

Amar Bhidé

Jan 02, 2026, 10:07 am EST

Here is a new Barron’s op-ed endorsing the SEC’s “rapacious deregulatory zeal.” Lest you think me a MAGA fanatic, my previous op-ed attacked the lawlessness of Trump’s tariffs. John Authers approvingly remembers my 2010 book arguing for tough banking rules as a “straightforward polemic.”

Free-thinking, open-minded discourse is also at the heart of a “Council for Transformative Enterprise” that some colleagues and I are promoting. We are inspired by Ned Phelps’s now-defunct Center on Capitalism and Society. (A couple of us were founding members of the Center, and I started and edited its eponymous journal.) However, our renewal project emphasizes progress in science, technology, and art more strongly. Its aims also align closely with my course on transformative medical advances.

Here’s a one-page summary. If you would like to help organize the Council and participate in its activities, please email me.

And if anyone would consider a “naming” gift, or housing the Council in an existing foundation and shaping its mission, we’d be immensely grateful.

Wishing you all a happy new year! The world was less awful in 2025 than in 2024. May it continue this trajectory.

Ants, Poet, and the Romance of Progress
https://bhide.net/wordpress_files/index.php/ants-poet-and-the-romance-of-progress/
Thu, 31 Jul 2025 14:41:33 +0000

I turned the talks I gave at IMD, Lausanne, and the Nova Medical School, Lisbon, into a YouTube video. It narrates how I came to teach a course on transformative medical innovations and why I am now trying to start a center on progress. So it’s kind of a nearly 70-year-old’s memoir squeezed into a half-hour clip.
I also squeeze in a brief commercial for an initiative on progress that I’m trying to start.
The video, cobbled together in my unprofessional home studio, won’t have Hollywood knocking down my door, but it gets the job done, I hope.
Plus, it has pictures of Federer and Djokovic, my benefactors, even my mother.


A Cautiously Optimistic View of LLMs (Project Syndicate op-ed)
https://bhide.net/wordpress_files/index.php/a-cautiously-optimistic-view-of-llms-project-syndicate-oped/
Thu, 30 Jan 2025 19:41:26 +0000

… After two deeply skeptical pieces in 2024 (which I stand by!)

 

Yes, I did promote my book and bash LLMs!
https://bhide.net/wordpress_files/index.php/yes-i-did-promote-my-book-and-bash-llms/
Tue, 15 Oct 2024 15:38:33 +0000
https://www.publichealth.columbia.edu/news/expert-entrepreneurship-tackles-health-care-innovation

An Expert on Entrepreneurship Tackles Health Care Innovation

October 7, 2024

For 35 years, Amar Bhidé taught entrepreneurship at Harvard, Chicago, Tufts, and Columbia, where he was Lawrence D. Glaubinger Professor of Business. He has written dozens of case studies (synopses of real-world scenarios crafted to spur vigorous classroom conversation), books on entrepreneurship, innovation, and the financial system, and op-eds on public policy issues for the Wall Street Journal, the Financial Times, and The New York Times.

In recent years, he’s increasingly delved into foundational questions about the complex, dynamic advances in productive knowledge. “It’s not just science,” he quips. “The steam engine did far more for the laws of thermodynamics than the laws of thermodynamics did for the steam engine.”

In January 2024, Bhidé accepted an appointment as a professor in Columbia Mailman’s Department of Health Policy and Management. He teaches the course “Lessons from Transformational Advances,” which digs into a series of case histories Bhidé developed to probe the complex, protracted processes that produced life-altering drugs, devices, and practices.

One case describes how, despite a long history and contemporary clinical promise, the widespread use of fecal microbiota transplants to treat gastrointestinal disease has been stymied by regulatory hurdles and provider resistance. Another shows how tamoxifen became a gold-standard treatment for breast cancer after failing as a contraceptive.

The overarching goal is to inspire, not just inform, students about how new treatments and practices evolve. “The cases show how contributing to progress offers great scope for personal flourishing, whatever your role and whatever your financial reward may turn out to be,” says Bhidé.

Is there a core theme in your work?

Bhidé: I’ve gone from looking at things principally from a businessperson’s, an entrepreneur’s point of view, to trying to understand the overall process of how productive knowledge advances. But the core theme has been the human striving for change and betterment that cannot be reduced to an algorithmic formula.

How did you make the pivot to advances in medicine?

Bhidé: As it happens, my mother was a pioneering cancer researcher, and my sister is an oncologist. But, with my general interest in productive knowledge, I could have written about anything, such as advances in computer science. I didn’t; I wrote about medical innovations. This was lucky: health care is a broad arena but nonetheless has some common features.

What do you hope your students take from the case studies?

Bhidé: The process of practical advances is complicated, protracted, and involves a large cast of characters. There is instrumental and humanistic value in appreciating these processes: we could do things better in the future if we understood how past advances came about. They also teach us what makes us human.

In December, Oxford University Press will publish your fifth sole-authored book, Uncertainty and Enterprise: Venturing Beyond the Known. How did it come about?

Bhidé: The book represents the culmination and synthesis of much of my writing and research. There are also many points of overlap with the seminar on transformational advances I’m currently teaching. The case studies have informed the book and the ideas that I’ve tried to distill in the book have informed how I’m teaching the course.

What are the foundational principles of Uncertainty and Enterprise?

Bhidé: We cannot or should not be sure of anything. We cannot be sure of what is or what was, and even less what could be or what should be. We can have only conjectures, provisional hypotheses that combine imagination and evidence. And inevitably, our conjectures diverge while much of our actions are interactive. We can’t act unilaterally. Imaginative yet grounded discourse plays a crucial role in aligning our conjectures.

Who is your target audience?

Bhidé: I want to persuade mainstream economists that there’s a broader way of looking at the world that, if they adopted it, could benefit both themselves and society. A second target is the intellectually curious, possibly “highbrow,” general reader, one interested in the rewards, challenges, and reasonable ways of dealing with the uncertainty that is so central to our lives yet so often ignored in economics and decision theory. I don’t, however, want to pick a fight with mainstream economics or provide cookbook recipes to general readers.

This fall, Project Syndicate published your op-ed calling large language models “mendacious talking horses.” Another, for Barron’s, calls the current AI investment craze a mania. What sparked your ire?

Bhidé: Writing my Uncertainty book has a lot to do with it. I tried to use LLMs to research, edit, and illustrate the book—it was a source of unending frustration, though “earlier” AI was invaluable. I also studied the evolution of AI for my book. AI grew out of a “fork” in the cognitive revolution of the 1950s and 1960s which conceived of the mind as a computer, often relying on statistical models to recognize patterns. A second fork treated the mind as a “meaning constructor” where meaning was highly contextual, historical, and cultural. Both forks have value.

Long before LLMs, statistical AI had proven its worth in many applications. But reducing all thought and speech to a mindless statistical model is absurd. Yet that’s what many LLMs try to do. The LLM mania also ignores the protracted trial and error through which cost-effective AI applications have emerged over the last 70 years. And it shows how ignorance of the way transformational technologies like AI evolve can become a social menace.

(More) Skeptical Remarks about AI
https://bhide.net/wordpress_files/index.php/more-skeptical-remarks-about-ai/
Sat, 22 Jun 2024 11:39:38 +0000

I made the remarks below to a CEO forum on June 21, 2024. The AI enthusiasts were vocally over the top; skeptics were quiet but privately supportive of my viewpoint.

Suppose someone said that smartphones were on the cusp of generating widespread transformations.

You might reasonably ask, “Where have you been these last twenty years, Rip Van Winkle?”

Smartphone apps like Uber and Airbnb have revolutionized transport and travel. Mobile search and social media have crushed mainstream media and advertising.

Given how far we have already come, is it likely that smartphones are at an inflection point? Similarly, with AI. Its applications have already been transformational. Indeed, it is AI tools and techniques that make smartphones smart. Nearly every smartphone app – from texting to sexting, mapping to matchmaking, video editing to streaming, Uber ridesharing to Airbnb rentals – incorporates AI. When we speak to our phones asking for weather forecasts or driving directions, we engage AI’s Natural Language Processing capabilities.

Moreover, AI’s widespread use precedes and goes far beyond smartphones. A 1956 workshop at Dartmouth kicked off academic AI research. In the following decades, practical applications evolved. Starting in the 1970s, George Lucas’s Star Wars epics dazzled audiences with AI special effects and animations. ‘Fuzzy logic,’ proposed by UC Berkeley’s AI guru Lotfi Zadeh in 1965, was used to control a Japanese subway in 1987. By 1990, Japanese consumer electronics companies were using fuzzy logic in camcorders, vacuum cleaners, room heaters, and air-conditioners.

In 2006 – a year before Apple’s iPhone – Oxford’s Nick Bostrom noted that cutting-edge AI had “filtered into general applications, often without being called AI because once something becomes useful enough and common enough it’s not labelled AI anymore.”

Sixteen years later, the claim that AI has just reached a take-off stage is perplexing. Merely maintaining historical growth rates from a high base should be a challenge.

Looking more closely at how AI became mainstream is instructive.

Traditional pre-AI software applications performed deterministic calculations. Payroll processing and optimizing complex operations were archetypal applications.

More often than not, however, uncertainties frustrate demonstrably correct solutions. Ambiguous information or incomplete knowledge makes calculating what’s truly best impossible. We must make do with guesses and approximations. Likewise, we often don’t use numbers or algebraic symbols to specify problems or discuss solutions. From everyday speech to Supreme Court deliberations, our discourse relies on ambiguous language – including analogies and metaphors.

Zadeh’s 1965 “fuzzy logic” and natural language processing thus epitomize the more realistic aspirations of AI.

But how to combine the digital computer’s capacity to flawlessly manipulate 1s and 0s with the incompleteness and imprecision of human knowledge and discourse?

One early approach incorporated specialized expertise. Medical rules of thumb were a popular basis for early expert systems. But this approach was limited to problems where experts had codifiable knowledge.

Other AI applications used statistical approximations. Humans merely specified the data – text and images, not just numbers – from which computers inferred statistical patterns. No understanding of the underlying process or consideration of contextual meaning was necessary. The dictum, repeated endlessly in elementary statistics classes, that “correlation is not cause” was brushed aside. AI programs did not even have to be told which variables mattered or to what degree. They used data mining to calculate the variable weights that best fit the observations.

AI programs used statistical correlations to mimic natural language. Actual natural language often requires reading minds — contextual interpretation of intent. The meaning of a simple ‘what!’ depends on context and tone. Going back to MIT’s Eliza, a 1960s-era psychotherapeutic chatterbot, AI programs used correlations as substitutes for any mindreading.

Statistical AI could also improve through trial and error. But again, this ‘machine learning’ did not require domain expertise, judgments about “lessons learned,” or understanding or consideration of context.

Nonetheless, the cost-effectiveness of statistical AI that did not require specialized expertise vastly broadened the scope of AI applications. Google’s search algorithm, which handily outperformed Yahoo’s human catalogers of the internet, was a striking example.

At the same time, AI hasn’t sailed smoothly in every sea. Belying dire predictions, AI did not dominate or displace human ‘knowledge work.’ Knowledge-intensive jobs grew, and wages stayed high.

AI even failed to automate many tasks that don’t require much thinking or training. Going back to Apple’s much-ridiculed 1993 Newton, handwriting was supposed to replace typing. In 2001, Bill Gates predicted that pen-based tablets would become “the most popular form of PC sold in America” in five years. They didn’t come close. Now, finally, convertible PCs with pens and touch screens have found a market, but keyboards remain the dominant input device. AI-enabled handwriting and voice recognition remain frustratingly hit or miss. Similarly, we usually still prefer the precision and accuracy of clicking or tapping on a button to giving voice instructions to personal assistants (like Siri or Alexa).

Where has the accuracy of statistical AI been acceptable, and where has it not?

Accuracy often depends on the ambiguity of inputs and outputs. Printed words that use standard fonts are less ambiguous than idiosyncratically handwritten words. Unsurprisingly, Optical Character Recognition software scans printed books and documents far more accurately than handwriting recognition programs.

Ambiguous outputs similarly undermine machine learning. Unquestionably correct or wrong results have helped make face recognition highly accurate. In contrast, correctly deciphering spoken words (“there” or “their”?) requires knowing the speaker’s intent. But, statistical correlations cannot reliably discover intent just as they cannot establish cause.

Accuracy also depends on the stability and uniformity of the process that generates the data used by AI applications. Physical or physiological processes, governed by invariant laws of nature, are usually stable. In contrast, human behavior and choices are subject to the whimsical vagaries of social attitudes and the zeitgeist. Statistical predictions about creditworthiness or purchasing behavior can, therefore, be highly inaccurate.

Data produced by a uniform process provides a more reliable basis for statistical inference. For example, OCR algorithms scan text more accurately if trained with materials in the same language and script. Conversely, data shaped in diverse ways by different contextual factors – if the observations are like unhappy families, each unhappy in its own way – can make statistical inferences practically useless.

Acceptable accuracy depends on the cost of mistakes — the stakes — and the price-performance of the alternatives. Nearly every ad that Google and Meta Platforms throw at me is utterly remote from my interests. But the stakes are low and even the wildly inaccurate targeting of algorithmic advertising beats the alternative of blind advertising.

In some creative applications of AI accuracy can be both unknowable and irrelevant. There are no correct special effects in Star Wars movies or animations in video games and cartoons. There is no objective benchmark for restoring old movie prints — who knows what the original looked like? But, automated AI restoration wins because it is much cheaper and faster than human restoration.

Turning to the current AI mania.

Ignorance of AI’s seven-decade history may explain some over-the-top predictions about its future. But even some savvy techies who are aware of what came before assert that Large Language Models – often now conflated with all of AI – are game changers. A veteran software entrepreneur believes AI is still in its “early infancy.” He argues that “earlier incarnations, such as protein folding and chess playing, were esoteric and of little relevance to the general public. The chat interface to LLMs has suddenly made AI accessible to the wider public. New ideas and applications are exploding. The real creativity is coming from people using it and suggesting new uses, rather than from the engineers creating it.”

I believe it is fair to say that before LLMs, most people were passive consumers, often unaware of the AI in their mobile phones, search engines, and social media. Certainly, LLMs have an arresting capacity for seemingly intelligent, natural-language conversations with non-technical users, and they promise to automate several analytical and creative tasks. Could these abilities make LLMs a “killer app” for AI to an even greater degree than the AI that has long been embedded in smartphones?

The analogy with spreadsheets is seductive. Spreadsheets had simple user interfaces that allowed people with limited technical expertise to build useful programs. Running on cheap personal computers, they offered compelling value in many applications that did not require the power of mainframes. Symbiotically, they helped expand the personal computer market, prompting investments in better computers.

LLMs have even simpler and more natural user interfaces than spreadsheets. Yet underneath their hoods, LLMs run statistical engines with the same statistical issues that delineated the practical scope of earlier AI applications. As with earlier AI, LLMs can shine in creative applications, such as image generation, where accuracy is irrelevant. Conversely, as with other statistical AI models, ambiguous inputs and outcomes derail their reliability and limit self-corrective learning. They can trip over data that is not generated by a stable process or is highly dependent on context.

Relying on statistical correlations rather than deductive logic or math, LLMs have offered bizarre solutions to reasoning problems, highlighting, for example, the risks of being attacked by a cabbage while rowing across a river. The Khan Academy’s AI tutor for kids struggles with elementary math. (It miscalculated subtraction problems such as 343 minus 17, couldn’t consistently round answers or calculate square roots, and typically didn’t correct mistakes when asked to double-check its solutions.)

Throwing every possible kind of data into LLMs’ training pots does not improve accuracy and reliability. Medical data does not make responses to legal or engineering questions any better. Training on Swahili literature does not sharpen statistical summaries of Shakespeare’s plays. Bulking up LLMs with disparate data so that LLMs can answer every question under the sun may increase their propensity to fantasize or hallucinate.

Spreadsheets, in contrast, didn’t overpromise and underdeliver. They didn’t tell jokes or write essays, but for their more targeted functions, they followed the user’s instructions precisely and correctly.

The chatty user-friendliness of LLMs isn’t a free lunch. It may well be a significant limitation. Yes, users need less knowledge of input rules and conventions than in their interactions with a spreadsheet, traditional search engine, or photo editor. But free-form inputs are also more ambiguous. Natural language prompts are more likely to evoke inaccurate or useless responses than traditional keyword searches.

In low-risk uses, people will tolerate LLM mistakes for convenience, as they do with autocomplete howlers in their text messages. The multi-trillion-dollar question is whether the benefits from low-stakes uses can cover the costs.

One important reason for the nearly immediate popularity of spreadsheets (besides their ease of use) was that they ran on personal computers and not expensive mainframes. Similarly, Uber and Airbnb apps provided cheap, reliable alternatives to taxis and hotels through smartphones that users already owned. In contrast, LLMs require users to purchase more expensive hardware. Moreover, user hardware accounts for a fraction of the costs of building, training, and operating LLMs. For now, and as in the 1999 internet bubble, manic investors are willing to subsidize uneconomic uses. What happens when the music stops?

At best, LLMs are akin to a new high-powered automobile engine that can win car races but makes too much noise and guzzles too much gas for street use. The hype notwithstanding, LLMs aren’t like Nikola Tesla’s alternating current inventions that drastically changed the economics of electrification. Why then gamble on the transformative acceleration of AI and ignore so many other possibilities for innovation and operational improvements the world offers?

The New Emperor’s Old Clothes (Project Syndicate op-ed)
https://bhide.net/wordpress_files/index.php/the-new-emperors-old-clothes-project-syndicate-op-ed/
Fri, 19 Apr 2024 19:49:41 +0000

My Skeptical View of the AI Frenzy

After nearly two years of focusing on book writing, I returned to op-ed writing to eject a bee that had been buzzing in my bonnet. Published in Project Syndicate, the text is below.

The Boring Truth About AI

To think that artificial intelligence is advancing at warp speed and creating existential risks to humanity is to confuse a mania with useful progress. The technology is less like nuclear weapons than like many other slowly evolving technologies that have come before, from telephony to vaccines.

Experts who warn that artificial intelligence poses catastrophic risks on par with nuclear annihilation ignore the gradual, diffused nature of technological development. As I argued in my 2008 book, The Venturesome Economy, transformative technologies – from steam engines, airplanes, computers, mobile telephony, and the internet to antibiotics and mRNA vaccines – evolve through a protracted, massively multiplayer game that defies top-down command and control.

Joseph Schumpeter’s “gales of creative destruction” and more recent theories trumpeting disruptive breakthroughs are misleading. As economic historian Nathan Rosenberg and many others have shown, transformative technologies do not suddenly appear out of the blue. Instead, meaningful advances require discovering and gradually overcoming many unanticipated problems.

New technologies introduce new risks. Invariably, military applications develop alongside commercial and civilian uses. Airplanes and motorized ground vehicles have been deployed in conflicts since World War I, and personal computers and mobile communication are indispensable for modern warfare. Yet life goes on. Technologically advanced societies have developed legal, political, and law-enforcement mechanisms to contain the conflicts and criminality that technological advances enable. Case-by-case court judgments are crucial in the United States and other common-law countries. These mechanisms – like the technologies themselves – are evolutionary and adaptive. They produce pragmatic solutions, not visionary constructs.

The Manhattan Project, which developed the atomic bomb and helped end World War II, was an exception. It had a high-priority military mandate. With the Nazis seeking to develop a bomb of their own, speed and effective leadership were essential. And as all-out thermonuclear war became a real threat, statecraft and strategic deterrence helped avert doomsday. 

But nuclear weapons are a misleading analogy for AI, which has followed the typically diffused, halting pattern of most other technological transformations. AI spans disparate techniques – such as machine learning, pattern recognition, and natural language processing – and has wide-ranging applications. Their common feature is mainly aspirational – to go beyond mere calculation to more speculative yet useful inferences and interpretations.

Unlike the Manhattan Project, which proceeded at breakneck speed, AI developers have been at work for more than seven decades, quietly inserting AI into everything from digital cameras and scanners to smartphones, automatic-braking and fuel-injection systems in cars, special effects in movies, Google searches, digital communications, and social-media platforms. And, as with other technological advances, AI has long been put to military and criminal uses.

Yet AI advances have been gradual and uncertain. IBM’s Deep Blue famously beat world chess champion Garry Kasparov in 1997 – 40 years after an IBM researcher first wrote a chess-playing program. And though Deep Blue’s successor, Watson, won $1 million by beating the reigning Jeopardy! champions in 2011, it was a commercial failure. In 2022, IBM sold off Watson Health for a fraction of the billions it had invested. Microsoft’s intelligent assistant, Clippy, became an object of ridicule. And after years of development, autocompleted texts continue to produce embarrassing results.

Machine learning – essentially a souped-up statistical procedure that many AI programs depend on – requires reliable feedback. But good feedback demands unambiguous outcomes produced by a stable process. Ambiguous human intentions, impulsiveness, and creativity undermine statistical learning and thus limit the useful scope of AI. While AI software flawlessly recognizes my face at airports, it cannot accurately comprehend the nuances of my carefully and slowly spoken words. The inaccuracy of 16 generations of professional dictation software (I bought the first in 1997) has repeatedly frustrated me.

Large language models (LLMs), which have become the public face of AI, are not technological discontinuities that magically transcend the limitations of machine learning. Claims that AI is advancing at warp speed confuse a mania with useful progress. I became an enthusiastic user of AI-enabled search back in the 1990s. I thus had high hopes when I signed up for ChatGPT’s public beta in December 2022. But my hopes that it, or some other LLM, would help with a book I was writing were dashed. While the LLMs responded in comprehensible sentences to questions posed in natural language, their convincing-sounding answers were often make-believe.

Thus, whereas I found my 1990s Google searches to be invaluable timesavers, checking the accuracy of LLM responses made them productivity killers. Relying on them to help edit and illustrate my manuscript was also a waste of time. These experiences make me shudder to think about the buggy LLM-generated software being unleashed on the world. That said, LLM fantasies may be valuable adjuncts for storytelling and other entertainment products. Perhaps LLM chatbots can increase profits by providing cheap, if maddening, customer service. Someday, a breakthrough may dramatically increase the technology’s useful scope. For now, though, these oft-mendacious talking horses warrant neither euphoria nor panic about “existential risks to humanity.” Best keep calm and let the traditional decentralized evolution of technology, laws, and regulations carry on.

Amar Bhidé, Professor of Health Policy at Columbia University’s Mailman School of Public Health, is author of the forthcoming Uncertainty and Enterprise: Venturing Beyond the Known (Oxford University Press).



High- and low-potential applications for LLMs
https://bhide.net/wordpress_files/index.php/high-and-low-potential-applications-for-llms/
Thu, 05 Oct 2023 17:07:38 +0000

It bears repeating that LLMs are not the same as all of AI: LLMs are a few years old; AI goes back decades. But putting that aside, here’s a little schematic for where LLMs are in principle – and depending heavily on how costs evolve – likely to work and fail.

My schematic has two dimensions:

1. Stability of the underlying phenomena. If I wanted to be geeky, I’d call it “stationarity”: do things change much? (If they do, trainability becomes a big problem.)

2. Accuracy: what’s the consequence of getting things wrong?

I’d argue that “stability” improves the potential, while the need for “accuracy” reduces it.
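As a toy illustration of this schematic (my own sketch, not part of the original post; the applications and scores are hypothetical placements, not the author's), one could score applications on the two axes and rank them:

```python
# Toy sketch of the two-dimensional schematic. Each hypothetical
# application gets a "stability" score (how stationary is the
# underlying phenomenon?) and an "accuracy_need" score (how costly
# are mistakes?), both on a 0-1 scale. Stability raises LLM
# potential; accuracy requirements lower it.

def llm_potential(stability: float, accuracy_need: float) -> float:
    """Crude score: stability helps, the need for accuracy hurts."""
    return stability - accuracy_need

# Illustrative placements (my guesses for illustration only):
applications = {
    "entertainment/image generation": (0.8, 0.1),  # mistakes are cheap
    "OCR of printed text":            (0.9, 0.5),  # stable fonts, moderate stakes
    "credit scoring":                 (0.3, 0.8),  # behavior shifts with the zeitgeist
    "air-traffic-control software":   (0.7, 1.0),  # mistakes are catastrophic
}

# Rank applications from highest to lowest LLM potential.
ranked = sorted(applications,
                key=lambda a: llm_potential(*applications[a]),
                reverse=True)
for app in ranked:
    s, n = applications[app]
    print(f"{app}: potential {llm_potential(s, n):+.1f}")
```

Under these made-up numbers, low-stakes entertainment uses come out on top and high-stakes, unstable domains at the bottom, which is the qualitative point of the schematic.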

You could apply the same schema to offshoring (remember all the breathless reports from McKinsey and various academics that some large portion of jobs would be lost to offshoring?), with the possible difference that entertainment applications, where accuracy requirements are low, can be more easily handled by LLMs than by low-wage offshored labor. (But even there, apparently, a fair bit of Walt Disney-style animation did go offshore.)

Whether LLMs can beat offshoring on costs remains an open question, I think.

Will frivolity sustain the development of the new AI?
https://bhide.net/wordpress_files/index.php/will-frivolity-sustain-the-development-of-the-new-ai/
Sun, 26 Mar 2023 13:04:05 +0000

Google’s Bard has been rather a disappointment for me so far. Several days after promising to try to edit my book draft (uploaded to my Google Drive), it does not even seem to have tried. I have also asked it to find current examples of ‘bootstrapped’ (self-financed) new businesses, and its answers were entirely useless. ChatGPT’s responses projected more confidence but were equally wrong.

I have been using Google Assistant for several years, and it too cannot answer more than the simplest queries.

Will more training make these chatbots more reliable or useful?
I am skeptical. Think of the “support” personnel who are supposed to help you resolve problems with high-tech products, travel arrangements, etc. They are equipped with “scripts” and presumably human intelligence, but they simply cannot deal with anything out of the ordinary. Yet I (and presumably others) usually contact them with problems that we cannot resolve after considerable Google searching on our own.

I am likewise terrified to read that ‘bots’ are now writing software code that may end up as components of important applications like air traffic control systems. Rather like poorly supervised labs in Wuhan playing around with viruses.

Yet the capacity of the new AI to make up stuff on the fly, e.g. write ballads, stories, etc. and create images is surely breathtaking.

So perhaps the continued evolution and refinement of AI proceeds through fun and entertainment. This is a time-honored pathway. Some new technologies do indeed start off solving practical problems – the first steam engines, commercialized by Newcomen and then greatly improved by Watt, were used to pump water out of coal mines. But many new technologies are initially developed by and for hobbyists: bicycles, gliders (which laid the foundation of powered flight), and automobiles, for example. VCRs, I am told, found a foothold through X-rated titles. Fanatical ‘gamers’ have supported advances in computer graphics and image processing.

Similarly, the harmless, frivolous story- and joke-telling capabilities, rather than "serious" (and risky) business or technical applications, are how the technology advances. But can these capabilities justify the enormous investment? We shall see.

]]>
Sunlighting Knightian Uncertainty/ Commemorating McArthur (New Working Paper) https://bhide.net/wordpress_files/index.php/sunlighting-knightian-uncertainty-commemorating-mcarthur/ https://bhide.net/wordpress_files/index.php/sunlighting-knightian-uncertainty-commemorating-mcarthur/#respond Fri, 25 Jun 2021 15:45:31 +0000 https://bhide.net/?p=1106

This is my last month at HBS, and I thought I'd squeeze in one last working paper, Renewing Knightian Uncertainty (just posted on SSRN). The real meat, if you can call it that, should be in the next two parts. I had first thought of writing a great big long paper, but I realized smaller pieces might be more digestible.

And here’s the back story for my quixotic enterprise.

One hundred years after its publication, Frank Knight's Risk, Uncertainty and Profit remains in print. But although a select group calls it a classic, very few economists now even read the book, much less use it in their courses or research.

Knight himself carries considerable blame. The book's thesis is true by definition, and therefore cannot be verified or falsified. More importantly, Knight provides no direction for extension or modification, and thus no help to scholars who must produce 'normal' research guided by a common paradigm. Knight himself made no further attempt to build on the thesis of his book, which was based on his PhD dissertation at Cornell and published when he was an assistant professor at Iowa. Rather, Knight would go on to become an éminence grise at the University of Chicago through his extensive writing on a range of topics that had little to do with uncertainty or profit and his leadership of a world-class economics department.

And the writing is truly terrible, possibly reflecting a “rural education” (as Angus Burgin put it).

I had never heard of Knight or his book in my graduate courses at Harvard's business school and economics department. Even Richard Caves's wide-ranging survey of Industrial Organization, which I took in 1985, made no mention of it. Then, a couple of years or so after I joined Harvard Business School's faculty to teach entrepreneurship in 1988, Dean John H. McArthur summoned me to a nearly three-hour lunch at his corner table in the faculty club. We talked about everything except why we were having lunch. At the end he said something like: "Perhaps you'd like to know why I asked you to lunch. Well, I've been reading your stuff and I wanted to put a face to the writing, to know who this person was who was writing this stuff."

A few days later, a copy of the 1971 edition of Knight's book arrived in the interoffice mail with one of John's classic handwritten notes, which read something like: "I think this will suit the way you think of the world."

It more than suited. Knightian uncertainty, and the 'judgments' and 'opinions' it impels, became a lodestar for nearly all my research and writing. I included the terms wherever I could, sometimes in titles and often in the text. I even managed to sneak Knightian uncertainty and judgment into an article on corporate governance published in the Journal of Financial Economics. The editor and referee didn't want it, but I thought it was crucial, and they let it go, towards the end of the article.

I made Knightian uncertainty the organizing principle for my 2000 book on entrepreneurship. The book’s stories made it a success — as academic books go — but the conceptual framing was utterly ignored.

I decided to take a crack at just the conceptual part in a stand-alone article, where, discretion (cowardice?) being the better part of valor, I excluded any mention of Knightian uncertainty. But even that I could not publish anywhere. In 2006, after securing a grant from the Kauffman Foundation, I managed to start Capitalism and Society, which by a remarkable coincidence included it in its first issue. Bob Solow, who wrote a commentary published in the same issue, clearly saw the Knight connection. Regardless, that piece went nowhere. (The journal flourished for about 12 years; what happened to it then is a sad story.)

Being stubborn, I am going to take one last crack at bringing Knight out of the shadows. This just-published working paper sketches out where I am going. By the end of the summer, I hope to flesh out the argument and perhaps, in another year, produce a university press monograph. If nothing else, this will serve as a tribute to John McArthur and Alfred Chandler (whose ideas I will include in the second part of my article and monograph).

This is also the last thing I will put out under a Harvard Business School 'label.' The first, published 42 years ago, was a case study on the Republic of Ireland that I wrote for the late Bruce Scott (who, coincidentally, co-authored a book on France with John McArthur).

]]>
https://bhide.net/wordpress_files/index.php/sunlighting-knightian-uncertainty-commemorating-mcarthur/feed/ 0
Celebrating Richard "Dick" Nelson https://bhide.net/wordpress_files/index.php/celebrating-richard-dick-nelson/ Tue, 15 Jun 2021 23:03:29 +0000 https://bhide.net/?p=1087 It gave me joy to write this nomination.

In nominating Richard Nelson, I feel both honored and nervous. He has secured so many glittering accolades and well-deserved tributes that it is difficult to say anything that has not already been said. I will therefore provide a personal perspective, and what stands out most for me is the versatility, the range, of Nelson's contributions.

1. The decades over which Nelson has been active provide an obvious, temporal marker of his exceptional range. 'A Theory of the Low Level Equilibrium Trap in Developing Countries,' considered his first "landmark" publication, appeared in the American Economic Review in 1956, a year before Nelson started his first faculty job at Oberlin College. Three years later, in 1959, another landmark, The Simple Economics of Basic Scientific Research: A Theoretical Analysis, was published in the Journal of Political Economy. This paper formalized the externality problem of R&D: Nelson's model predicted that profit-seeking firms would underinvest in basic science because they would not be able to fully capture the returns from such investment.
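The appropriability argument can be put in stylized textbook form (my own rendering for illustration, not Nelson's notation): a profit-seeking firm equates its private marginal return to the marginal cost of research, but spillovers drive a wedge between private and social returns.

```latex
% Stylized appropriability sketch (illustrative notation, not Nelson's own).
% A firm chooses research spending R to maximize private profit
% V_p(R) - cR, while a planner also counts spillovers E(R) to others:
\[
V_p'(R^{*}) = c, \qquad
V_p'(R^{**}) + E'(R^{**}) = c, \qquad E'(\cdot) > 0 .
\]
% With diminishing private returns (V_p'' < 0), the second condition
% implies V_p'(R^{**}) < V_p'(R^{*}), hence
\[
R^{**} > R^{*} :
\]
% the market level of basic research falls short of the social optimum.
```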

The output never ceased. In every succeeding decade, Nelson published as much significant work as most scholars would hope to produce in their entire careers. More than sixty years later, Nelson, now aged 91, is still producing scholarly work: the October 2020 issue of Industrial and Corporate Change has a sole-authored article by him.

2. Another striking feature of the Nelson range is the variety of the methodologies he has used and promoted. His first "round" of landmark papers elegantly and creatively applied classic theoretical methodologies and equilibrium models. These include, besides the two papers above, Investments in Humans, Technological Diffusion, and Economic Growth, written with Edmund S. Phelps and published in the American Economic Review in 1966. The Nobel committee described the significance of this paper (when it awarded the 2006 Prize to Phelps) thus:

“The Nelson-Phelps analysis focuses on the stock – as opposed to the accumulation – of human capital as a key factor behind technology growth in the sense that an educated and knowledgeable workforce is better able to adopt available new technology. In the empirical growth literature, the Nelson-Phelps setting has provided a means of formalizing technological catch-up across countries, whereby technologically less advanced countries adopt the technologies of the more advanced ones, and do so more efficiently, the more educated the workforce in the adopting country. The analysis thus explains why data indicate that output growth is more related to the stock of human capital than to its rate of growth. The model also helps explain why skill premia – the higher wage rates enjoyed by skilled, or educated, workers – tend to be higher in times of rapid technological change: an educated workforce is able to assimilate technological advances more rapidly. Such reasoning has been used to interpret the recent increase in returns to education that has taken place in many countries, in particular the US. To the extent that there are important spillovers (“externalities”) in the adoption of technologies, the returns to education might not be fully reflected in the skill premium. Thus, Nelson and Phelps argued, their model suggests a possible reason for subsidizing education.”
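The mechanism the committee describes is commonly summarized by the Nelson-Phelps catch-up equation; the following is a standard textbook rendering rather than a quotation from the 1966 paper.

```latex
% Nelson-Phelps technology diffusion (standard textbook form).
% A(t): level of technology in practice; T(t): theoretical frontier,
% growing at an exogenous rate lambda; h: stock of human capital.
\[
\frac{\dot{A}(t)}{A(t)} \;=\; \phi(h)\,\frac{T(t)-A(t)}{A(t)},
\qquad \phi'(h) > 0 .
\]
% In the long run A grows at lambda and the proportional gap settles at
% (T-A)/A = lambda/phi(h): a more educated workforce (higher h) keeps
% practice closer to the frontier, so output depends on the *stock* of
% human capital, not just its rate of accumulation.
```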

With his reputation as a leading "equilibrium" theorist established in the 1960s, Nelson turned to the "Carnegie school's" heterodox use of "routines" as a foundation of organizational decision-making. This new direction resulted in the now classic book, written with Sidney Winter, An Evolutionary Theory of Economic Change. The theory in this book provided an "evolutionary" account (rather than an "equilibrium" model) of how firms and industries acquire their capabilities. The account is evolutionary in that it emphasizes the trial-and-error process of development and the role of markets in selecting out the firms that do not develop the capabilities needed to survive.

The book, which was published in 1982, had received approximately 23,000 Google Scholar citations by May 2012. Now, nine years later, it has an astounding 47,729. The book and Nelson's other publications have also made evolutionary economics a serious alternative to the standard model, which is nearly miraculous in my view.

Case histories and studies of specific industries are a third, and unusual, feature of Nelson's methodological range. He started these when he was still developing mainstream equilibrium models: in 1962, for example, he published the results of his investigations into the development of transistors and other such technologies, and in 1977 Nelson and Winter showed how innovation and its institutional and organizational enablers differed across the agricultural, medical, and aircraft sectors. In recent years, Nelson and his collaborators have produced a series of case histories of notable medical innovations.

The combination of producing case histories as well as path-breaking theory is highly unusual for a leading economist. Economic historians such as Mokyr and Rosenberg apart, most economists avoid case histories; and those who do work on them rarely produce path-breaking theory (be it orthodox or heterodox). And although Nelson's case histories have not received the same acclaim and citations as his theoretical work, I have personally found them an invaluable resource in writing my own case histories of medical innovation. That a top theorist also produces case histories is an inspiring example, which I hope will be more widely followed in the years to come.

3. The range of audiences Nelson has influenced over his long career is also remarkable. As the Nobel announcement (for Phelps's 2006 prize) quoted above indicates, the early work has had an enduring influence on mainstream economists. The subsequent "evolutionary" work has inspired, as mentioned, research on the "heterodox" side. This two-sided influence alone puts Nelson in a select band of scholars. What is even more exceptional is his influence outside economics departments, heterodox or otherwise.

Nelson's work has shaped research in strategy departments of business schools at least as much as it has research in economics departments. According to Johann Murmann's entry in the Palgrave Encyclopedia of Strategic Management, "Nelson's long-term research on innovation (e.g. Nelson, Mowery and Fagerberg, 2006), intellectual property rights (e.g. Levin et al., 1987) and the larger institutional environment (e.g. Nelson, 1993; Nelson and Sampat, 2001) in which innovative activity takes place has also been very influential in strategic management because his ideas help explain how firms gain and lose competitive advantage." The influence, continues Murmann, "is based in large measure on being able to construct a theoretical explanation for how firms are able to develop the capabilities to organize the often exceedingly complex research, development and production processes that characterize modern economies (Dosi, Nelson and Winter, 2000) where increasingly sophisticated products and services sweep away old ones (Nelson and Winter, 1977)."

Alongside his wide use of methodologies, Nelson has reached, and in some ways created, an audience for methodological writing. He has been a tireless explainer and exponent, and not just a producer, of evolutionary economics. A little more surprisingly, but in keeping with his own example, he has written about the scholarly advantages of "appreciative" case histories. And as an "evolutionary" critic of the "physics envy" of some economists, he has written about the diversity of approaches seen across the different branches of the natural sciences.

Public policy has been another important arena for Nelson's long-lived impact. From the very beginning of his career, he has aimed to make a difference to people's lives outside the ivory tower. The scholarly research on technological innovation already mentioned reflects this aim, as does his more recent research on the varied roles of government, which emphasizes the practical reality that, particularly in modern economies, the roles of the public and private sectors are inevitably intertwined. Nelson has produced thoughtful social commentary alongside more technical policy-oriented work. One celebrated monograph, The Moon and the Ghetto, based on the Fels Lectures on Public Policy Analysis, first published in 1977 and "revisited" by Nelson in 2011, provides a striking example. Its main thesis pertains to the unevenness of progress: why a moon landing was more easily accomplished than treating the problems of urban ghettos. It also raises important yet difficult questions, such as the role of policy analysts vis-à-vis elected officials. In principle, Nelson's book points out, analysts merely evaluate alternative means for ends specified by elected officials. In practice, the analyst cannot avoid choices of ends. To me this raises profound questions about legitimacy and governance, which are clearly playing out in the current Covid crisis.

Nelson's social commentary has addressed an intellectual rather than a mass audience. While this has not made him a household name, it has allowed for nuance and sophistication. Notably, Nelson has written about the many ways in which the state influences technological development, but he has also criticized what he calls "techno-fetishism" and "techno-nationalism." And because he is difficult to pigeonhole, Nelson's voice is all the more respected and effective.

*****

Nelson's wide and multiple ranges have most emphatically not made him an intellectual dilettante. There is an unusual coherence to his work across these many decades, methodologies, and audiences. Almost everything relates to a lifelong interest in long-run economic change enabled by technological advances and the complementary development of economic institutions. This interest provides the thread that connects the theoretical model of the 1959 Journal of Political Economy paper on under-investment in R&D, his 1966 paper with Phelps on the nexus of education and the adoption of new technologies, the role of routines in generating and selecting innovations in his 1982 book with Winter, articles on intellectual property rules, and the social commentary in The Moon and the Ghetto. This coherence makes the Nelson corpus much greater than the sum of its parts.

Similarly, selfless contributions to the work of other scholars and communities make his life's work far greater than the sum of his own publications. These contributions, both public and formal as well as private and informal, started early: in 1962, when Nelson was on the staff of President Kennedy's Council of Economic Advisers, he helped convene and organize a conference on The Rate and Direction of Inventive Activity. Sixty years later, the National Bureau of Economic Research held a conference to commemorate the 1962 one. Similarly, in 1993, Nelson's leadership produced another celebrated conference on National Innovation Systems. Limitations of space make it impossible to list the significant contributions of Nelson's students and of the other scholars he has privately influenced. I personally could not be more thankful for Nelson's diligent reading of, and constructive suggestions on, nearly everything I have written since we first met 21 years ago; I am one of the many who have thus benefited.

]]>