In talking about the impact of artificial intelligence, astrophysicist Neil deGrasse Tyson makes the point that "schools value grades more than students value learning." He said this while discussing the use of AI in schools today and the issue of cheating: for example, a student using GPT to write a classroom essay, a use of AI I brought up in Part 1 of this conversation.
What Tyson is really saying, in other words, is that AI will force institutions to change their value systems. This is a blindingly simple but important point. As things stand, institutions often reward the ability to reproduce rote knowledge. If a student can write an essay about something that has already been written about a million times over, and can write it to a standard acceptable to their teacher, they will get a good grade. Now, enter AI.
If AI removes the value of writing rote essays that a machine can produce, what happens to the institution of learning? It's a simple but important question, because it clearly implies a shift in value from one kind of learning to another, and from one kind of thinking to another.
We could apply the same logic to running a business. If people are being evaluated on doing things that can be done by a machine, organizational values will have to change. They will have to change to something that can't be solved for them by AI. In short, what matters now is the ability to think about and solve problems that haven't already been solved.
AI excels at solving problems that, to a large degree, have already been solved. But it won't be able to solve genuinely new problems, because it won't have access to the information needed to solve them.
Of course, Tyson may only be thinking about the next phase of AI, but for the time being that's what most of us have to think about.
~ Demian Entrekin | Head of Product
You’re right, Demian—these latest developments in AI are forcing a wholesale change in thinking across these broad domains and institutions of education, culture, and yes, business and industry. (And soon enough, the same will need to happen in politics and government; some of our most thoughtful scientists and AI philosophers have been nervously muttering about the need for a 6-month pause in training AI systems more powerful than GPT-4, given the apparently non-zero probability of existential doom for humanity otherwise.)
I say these folks are 'muttering' because, so far, that's how it's coming across in society at large; the mainstream media isn't nearly as focused on highlighting this issue as it should be. Anyhow, it's breathtaking to consider how rapidly this kind of re-think is being compelled, quite literally overnight, with ChatGPT's astonishing uptake having taken place in just a matter of months.
To be sure, the improvement in AI itself didn’t happen overnight; the key development of the transformer machine learning model occurred in 2017, when it was introduced by a team of AI researchers at Google Brain, who so brain-ticklingly titled their technical paper about it, “Attention is All You Need.” I’m pretty sure that title will go down in academia as legendary, soon enough; it’s zestfully different from the usual boring, obscure titles in such academic studies!
Our business has certainly not been immune to all this. We're already several decades into technology's inroads in the investment banking industry, with things like pricing optimization and algorithmic trade execution having become the norm in Wall Street trading. The old image of a crowded trading floor with harried traders jostling each other and brandishing (wired!) phones is long gone; there's hardly anyone there now, as everybody is glued to their computer monitors elsewhere. Including, of course, even us casual-trader normies.
AI has the potential to supercharge even this relatively advanced stage of tech in the finance industry. Germane to Finalis' niche in it, can you imagine the affordances AND efficiencies AI might be able to deliver in regulatory compliance and risk management? While our expert humans-in-the-loop within our Work Flow, Deal Flow and Funds Flow product lines will still be required to oversee these functions, it's clear that the time-saving potential for them is enormous. And so, if we substantially scale up our customer base of investment bankers on the platform, we won't need to correspondingly increase the number of human agents servicing their needs, while the quality of decisions and judgments will remain at the same high level, or indeed even improve.
It is precisely our corps of human agents to which I think you’re obliquely referring, when you talk about an AI model not having access to the information it needs to solve problems. These folks still have the advantage of the experiential modality that AI by definition does not have—long years of human relationships in the business, and the hard-earned knowledge, insight and sagacity that all confers.
Right now, everyone and their aunt is integrating AI, in the form of ChatGPT (and other) chatbots, into their products. Here at Finalis, our third-party vendors—Intercom, Hubspot, Atlassian, Github, among others—are already touting this. Just the other day, I was startled to encounter a new dropdown menu while on a Hubspot Knowledge Base article: when I double-clicked to select a paragraph, a menu appeared suggesting that I could rewrite, expand, summarize or even change the tone of the text. It had NOT been there just the day before!
Well, this response has been lengthy… but you did get me started on it, with your usual perspicacious take on things. Let's continue all this in Part 3, and beyond.
~ Lloyd Nebres | Communications Manager