
Just a sample of the Echomail archive

Cooperative anarchy at its finest, still active today. Darkrealms is the Zone 1 Hub.

   CONSPRCY      How big is your tinfoil hat?      2,445 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 2,358 of 2,445   
   Mike Powell to All   
   "The height of nonsense"   
   08 Feb 26 11:26:56   
   
   TZUTC: -0500   
   MSGID: 2116.consprcy@1:2320/105 2dedf794   
   PID: Synchronet 3.21a-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0   
   TID: SBBSecho 3.28-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0   
   BBSID: CAPCITY2   
   CHRS: ASCII 1   
   FORMAT: flowed   
   "The height of nonsense": Oracle co-founder Larry Ellison's 1987 argument   
   that not everything should be AI makes perfect sense in 2026   
      
   By Wayne Williams published 4 hours ago   
      
   A forgotten AI debate from 38 years ago feels uncomfortably relevant today   
      
   In 1987, long before artificial intelligence became the mass-market obsession   
   it is today, Computerworld convened a roundtable to discuss what was then a new   
   and unsettled question: how AI might intersect with database systems.   
      
   The roundtable, chaired by tech royalty Esther Dyson, brought together three
   sharply different perspectives: Tom Kehler of Intellicorp represented the
   expert systems movement (the 1980s equivalent of today's generative AI hype);
   John Landry of Cullinet focused on applying AI techniques to enterprise
   applications; and Larry Ellison, president and CEO of Oracle, took a view that
   was already contrarian even by the standards of the day.
      
   What makes the discussion notable in hindsight is not the optimism around AI,   
   which was common at the time, but Ellison's repeated insistence on limits.   
   While others described AI as a new architectural layer or even a "new species"   
   of software, Ellison argued that intelligence should be applied sparingly,   
   embedded deeply, and never treated as a universal solution.   
      
   AI merely a tool   
      
   "Our primary interest at Oracle is applying expert system technology to the   
   needs of our own customer base," Ellison said. "We are a data base   
   management system company, and our users are primarily systems developers,
   programmers, systems analysts, and MIS directors."   
      
   That framing set the tone for everything that followed. Ellison was not   
   interested in AI as an end-user novelty or as a standalone category. He saw it   
   as an internal tool, one that should improve how systems are built rather than   
   redefine what systems are.   
      
   Many vendors treated expert systems as a way to replicate human decision making   
   wholesale. Kehler described systems that encoded experience and judgment to   
   handle complex tasks such as underwriting or custom order processing.   
      
   Landry went further, arguing that AI could form the architecture for an   
   entirely new generation of applications, built as collections of cooperating   
   expert systems.   
      
   Ellison pushed back against this notion, prompting moderator Esther Dyson to ask:
   "Your vision of AI doesn't seem to be quite the same as Tom Kehler's, even   
   though you have this supposed complementary relationship. He differentiates   
   between the AI application and the data base application, whereas you see AI   
   merely as a tool for building data bases and applications."   
      
   "Many expert systems are used to automate decision making," Ellison   
   replied. "But a systems analyst is an expert, too. If you partially automate   
   his function, that's another form of expert system."   
      
   Ellison drew a clear line between processes that genuinely require judgment and   
   those that don't. In doing so, he rejected what might now be called AI   
   maximalism.  "In fact, not all application users are experts or even   
   specialists," he said. "For example, an order processing application may   
   have dozens of clerks who process simple orders. Instead of the order   
   processing example, think about checking account processing. Now, there are no   
   Christmas specials on that. There are no special prices. Instead, performance   
   is all-critical, and recovery is all-critical."   
      
   "The height of nonsense"   
      
   When Dyson suggested a rule such as automatically transferring funds if an   
   account balance dropped below a threshold, Ellison was blunt.  "That can be   
   performed algorithmically because it's unchanging," he said. "The   
   application won't change, and to build it as an expert system, I think, is   
   the height of nonsense."   
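
   Ellison's distinction is easy to see in code. A fixed balance threshold is a
   one-line conditional that never changes, not a body of expert knowledge to
   reason over. The sketch below, in Python, uses a hypothetical Account record
   and transfer routine invented purely for illustration (nothing here is from
   Oracle's products); it shows Dyson's example done the way Ellison suggests,
   algorithmically:

       from dataclasses import dataclass

       THRESHOLD = 500.00   # assumed figure; the rule itself never changes

       @dataclass
       class Account:            # hypothetical account record, illustration only
           name: str
           balance: float

       def transfer(src: Account, dst: Account, amount: float) -> None:
           # Move funds between two accounts; no judgment or inference involved.
           src.balance -= amount
           dst.balance += amount

       def sweep_if_low(checking: Account, savings: Account) -> None:
           # Dyson's rule as a plain conditional: top up the checking account
           # whenever its balance drops below the threshold.
           if checking.balance < THRESHOLD:
               transfer(savings, checking, THRESHOLD - checking.balance)

   Because the rule is unchanging, the engineering effort goes where Ellison
   said it belongs for checking account processing: performance and recovery,
   not inference.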
      
   This was a striking statement in 1987, when expert systems were widely promoted   
   as the future of enterprise software. Ellison went further, issuing a warning   
   that sounds surprisingly modern.  "And so to say that a whole generation is   
   going to be built on nothing but expert systems technology is a misuse of   
   expert systems. I think expert systems should be selectively employed. It is   
   human expertise done artificially by computers, and everything we do requires   
   expertise."   
      
   Rather than applying AI everywhere, Ellison wanted to focus it where it changed   
   the economics or usability of system development itself. That led him to what   
   he called fifth-generation tools, not as programming languages, but as   
   higher-level systems that eliminated procedural complexity.  "We see enormous   
   benefits in providing fifth-generation tools," he said. "I don't want to   
   use the word `languages,' because they really aren't programming   
   languages anymore. They are more."   
      
   He described an interactive, declarative approach to building applications, one   
   where intent replaced instruction.  "I can sit down next to you, and you can   
   tell me what your requirements are, and rather than me documenting your   
   requirements, I'll sit and build a system while we're talking together, and   
   you can look over my shoulder and say, `No, that's not what I meant,' and   
   change things."   
      
   The promise was not just speed, but a change in who controlled software.  "So   
   not only is it a productivity change, a quantitative change, it's also a   
   qualitative change in the way you approach the problem."   
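
   The fifth-generation idea he sketches, stating requirements declaratively
   and letting the tool supply the procedure, can be illustrated with a small
   sketch in Python. This is not Oracle's actual tooling: the ORDER_FORM spec
   and validate() routine below are hypothetical, invented purely to show the
   contrast between intent and instruction:

       # The developer declares *what* an order form contains; a generic
       # routine works out *how* to check it. Changing the application means
       # editing the spec, not rewriting procedures.
       ORDER_FORM = {
           "customer": {"type": str, "required": True},
           "quantity": {"type": int, "required": True},
           "notes":    {"type": str, "required": False},
       }

       def validate(record: dict, spec: dict) -> list[str]:
           # One interpreter serves every form built from a spec like this.
           errors = []
           for field, rules in spec.items():
               value = record.get(field)
               if value is None:
                   if rules["required"]:
                       errors.append(f"missing required field: {field}")
               elif not isinstance(value, rules["type"]):
                   errors.append(f"{field} should be {rules['type'].__name__}")
           return errors

       print(validate({"customer": "Acme", "quantity": "two"}, ORDER_FORM))
       # prints: ['quantity should be int']

   The qualitative change Ellison describes is visible even at this toy scale:
   the person "looking over my shoulder" edits a description, not a program.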
      
   Not anti-AI   
      
   That philosophy carried through Oracle's later product strategy, from early   
   CASE tools to its eventual embrace of web-based architectures. A decade later,   
   Ellison would argue just as forcefully that application logic belonged on   
   servers, not on PCs.   
      
   "We're so convinced that having the application and data on the server is   
   better, even if you've got a PC," he told Computerworld in 1997. "We   
   believe there will be almost no demand for client/server as soon as this comes   
   out."   
      
   By 2000, he was even more forthright.  "People are taking their apps off PCs   
   and putting them on servers," ZDNET reported Ellison as saying. "The only   
   things left on PCs are Office and games."   
      
   In retrospect, Ellison's predictions were often early and sometimes   
   overstated. Thin clients did not replace PCs, and expert systems did not   
   transform enterprise software overnight. Yet the direction he described proved   
   durable.   
      
   Application logic moved to servers, browsers became the dominant interface, and   
   declarative tooling became a core design goal across the industry.   
      
   What the 1987 roundtable captures is the philosophical foundation of that   
   shift. While others debated how much intelligence to add to applications,   
   Ellison questioned where intelligence belonged at all.   
      
   He treated AI not as a destination, but as an implementation detail, valuable   
   only when it reduced complexity or improved leverage.   
      
   As AI once again dominates enterprise strategy discussions, the caution   
   embedded in Ellison's early comments feels newly relevant.   
      
   His core argument was not anti-AI, but anti-abstraction for its own sake.   
   Intelligence mattered, but only when it served a larger architectural goal.   
      
   In 1987, that goal was making databases the center of application development.   
   Decades later, the same instinct underpins modern cloud platforms. The   
   technology has changed, but the tension Ellison identified remains unresolved:   
   how much intelligence systems need, and how much complexity users are willing   
   to tolerate to get it.   
      
      
   https://www.techradar.com/pro/the-height-of-nonsense-oracle-co-founder-larry-ellisons-1987-argument-that-not-everything-should-be-ai-makes-perfect-sense-in-2026
      
   $$   
   --- SBBSecho 3.28-Linux   
    * Origin: Capitol City Online (1:2320/105)   
   SEEN-BY: 105/81 106/201 128/187 129/14 305 153/7715 154/110 218/700   
   SEEN-BY: 226/30 227/114 229/110 134 206 300 307 317 400 426 428 470   
   SEEN-BY: 229/664 700 705 266/512 291/111 320/219 322/757 342/200 396/45   
   SEEN-BY: 460/58 633/280 712/848 902/26 2320/0 105 304 3634/12 5075/35   
   PATH: 2320/105 229/426   
      



(c) 1994,  bbs@darkrealms.ca