This is less of a post and more of a permanent draft. I'm publishing this to force myself to learn more about AI, understand it better, and have a more thoughtful and nuanced view of the good and the bad.
This is a post about artificial intelligence (AI). I know every Tom, Dick, and Harry has a view on AI, which got me thinking: why shouldn't I have one? I am on the internet; I have a blog, which means it's practically a crime against humanity not to talk confidently about things you have no fucking idea about.
With that said, this post was inspired by an interview with Jensen Huang, the CEO of NVIDIA, that I partially heard several weeks ago. I was going to the office in a friend's car, and this podcast was playing. I wasn't really interested in listening to it, so I was reading something else on my phone. At one point I got distracted, and this part of the interview was playing:
Cleo Abram: So if someone is watching this and came in knowing that NVIDIA is an incredibly important company, but not fully understanding why or how it might affect their life, and they're now (hopefully) seeing this big shift we've gone through in computing over the last few decades, with so many exciting and sometimes strange possibilities ahead… If they want to look into the future a bit, how would you advise them to prepare or think about this moment? How are these tools actually going to affect their lives?
Jensen Huang: There are several ways to think about the future we're creating. One way is to imagine that the work you do remains important, but the effort required to complete it shrinks dramatically, going from taking a week to being nearly instantaneous. If the drudgery of work is eliminated, what are the implications?
This is similar to what happened when highways were introduced during the Industrial Revolution. Suddenly, with interstate highways, suburbs began to form. The distribution of goods across the country became much easier. Gas stations, fast-food restaurants, and motels started popping up along highways because people were now traveling long distances and needed places to refuel, eat, and rest. Entire new economies and industries emerged.
Now, imagine if video conferencing allowed us to see each other without ever needing to travel. It would fundamentally change how and where people live and work. Suddenly, working far from the office would become more viable.
Consider another scenario: what if you had a software programmer with you at all times, one who could instantly build whatever you dreamed up? What if you could take just the seed of an idea, sketch it out, and instantly see a working prototype? How would that change your life? What new opportunities would it unlock? What would it free you to do?
And now, how does this affect jobs?
Over the next decade, artificial intelligence, while not universally capable, will reach superhuman levels in certain areas. I can tell you what that feels like because I work alongside some of the most intelligent people in the world. They are the best at what they do, far better than I could ever be in their respective fields. And yet, that doesn't make me feel obsolete. Instead, it empowers me. It gives me the confidence to tackle even more ambitious challenges.
So, what happens when everyone has access to super-intelligent AIs that excel at specific tasks? It won't make us feel irrelevant; it will empower us. It will give us confidence. If you've ever used AI tools like ChatGPT, you've probably already experienced this. Learning new things feels easier. The barriers to understanding complex subjects are lower. It's like having a personal tutor with you at all times.
That's the future I see: one where this feeling of empowerment becomes universal. If there's one piece of advice I'd give, it's this: get yourself an AI tutor. Use it to learn, to program, to write, to analyze, to think, and to reason. AI will make all of us feel more capable.
We are not becoming superhuman because we ourselves are changing; we are becoming superhuman because we have super AI.
As we reached the office, my friend said something about AI tools changing everything, and I casually said, "We no longer have an excuse to be stupid." I didn't think much before blurting it out, but given how wisdomous I am, I casually drop bangers like street dogs dropping deuces whenever they have the urge to unburden themselves. Later, I got to thinking about what I said, remembered what Jensen said, and found that I agree with him to a large degree.
Before I sketch out my view of these AI tools, I think a few clarifications are necessary to ground my perspective. In my experience, no two people mean the same thing when they say artificial intelligence (AI).
For some, AI means tools like ChatGPT, and for others, it means magic: sentient agents that can do anything and everything much better than humans can. That means everything from knowing everything to solving space travel. For the purposes of this post, AI means the technologies that underpin tools like ChatGPT and Claude.
AI is also a divisive topic, and the views on it span the full spectrum of optimism and pessimism. For the optimists, the current transformer-based large language models are just the beginning, and we'll have general intelligence and superintelligence soon. I don't know what they mean by that, but I think of the movies I, Robot and Transcendence.
Let me share just a couple of views on AI to highlight the extremes of opinion. On the optimistic end, the CEOs of the giant AI companies think that what they are building can cure our worst diseases, speed up economic progress, and fix democracy.
On the pessimistic end, some think these LLMs are just dumb stochastic parrots vomiting the collective median mediocrity of humans. Others think AI companies are a giant con fanning the flames of a bonfire of capital that has consumed hundreds of billions with no use cases, and will go bust.
My own heuristic for thinking about AI is to assume disruption by default. I don't know if these technologies will continue advancing at the same rate they are. I don't know if we'll have superintelligence, general intelligence, or whatever ambiguous fucking terms the nerds are making up.
I don't think humans will be reduced to participants in The Hunger Games with murderous superintelligent robots, or to reliving the final scene of The Matrix. Instead, I am operating under the assumption that AI will continue progressing, and there is bound to be disruption.
What's the basis of my assumption?
Simple linear extrapolation of the current trends. Yes, I know it's dumb, but it's not like the people writing those fancy fucking 20,000-word-long blog posts on AI are doing anything different. I'm just honest enough to admit that I have a very dumb model of AI. I got no fancy theories about AI progress to peddle.
What I want to do in this post is sketch out a skeleton of my current opinion about these tools. I've been thinking about them for a while now, and I'm not sure how much confidence to place in my current view and understanding. That being said, I want to use this post as a beachhead: an outline I can keep building on as I continue thinking deeply. This is a permanent draft rather than a fully fleshed-out view.
Overview
One of the triggers for writing this post was hearing Jensen Huang say that with AI tools, you have a tutor in your pocket:
If you've ever used AI tools like ChatGPT, you've probably already experienced this. Learning new things feels easier. The barriers to understanding complex subjects are lower. It's like having a personal tutor with you at all times.
This isn't a new observation, of course; plenty of other people have said it, and I've used the phrase casually multiple times. It hit me this time because I had started using these AI tools to teach myself new things. Until very recently, I hadn't relied on them for learning because of both the hallucination problem and a sense of supremacy in my own Googling skills. In hindsight, I think I was being dogmatic.
Of course these tools hallucinate, but then so do humans, experts included. As far as criticisms of AI tools go, "that they hallucinate" strikes me as a lazy observation. In my experience, these tools can be phenomenal learning companions.
If you want to quickly get a sense of the breadth and depth of a concept before diving deeper, these tools can help immensely. In fact, some scientists seem to think that there's a bright side to hallucinations:
Now, A.I. hallucinations are reinvigorating the creative side of science. They speed the process by which scientists and inventors dream up new ideas and test them to see if reality concurs. It's the scientific method - only supercharged. What once took years can now be done in days, hours and minutes. In some cases, the accelerated cycles of inquiry help scientists open new frontiers.
Let me give you a tangible example. I work in finance, and I love learning about financial history. One rabbit hole I've always wanted to go down was the Japanese bubble of the 1980s-90s, but I never got around to it. One day, out of the blue, I decided to ask ChatGPT, Gemini, and Claude about it in between my gym sets; I'm "optimizing every single idle moment," baby!
I was pleasantly surprised at how good these tools were at giving me a basic grounding of what happened, outlining different schools of thought on the triggers for the crisis, and so on. It goes without saying that there's a risk in relying only on the things these tools teach you:
I don't always fully trust what comes out of this; it's just a probabilistic, statistical recollection of the internet. - Andrej Karpathy
Also, I prefer asking different models the same questions. While the answers are broadly similar (duh! they're all trained on the same stuff), there are often subtle differences that are quite helpful. It also goes without saying that relying solely on these tools for learning is a terrible idea. Once I read whatever these LLMs told me about the Japanese bubble, I started Googling to find the most cited papers on the topic.
The very first highly cited paper I read pretty much said the exact opposite of whatever these tools told me. Despite that, having a fuzzy and very broad overview of the topic helped contextualize what I was reading.
Between asking these LLMs and my own Googling, I had a wide menu of rabbit holes to go down, and that was quite helpful. This is an example of both the possibilities and limitations of AI tools. They are one more instrument in your learning arsenal, but they should never be the only one, at least for the foreseeable future.
There are also times when we all come across complicated concepts that most of us wouldn't have read about. This happens a lot to me in finance, physics, and philosophy. Yes, we can Google and read Wikipedia entries, but there's a natural limit to our comprehension because it's a one-way interaction. However, with these tools, you can have a back-and-forth conversation until you reach a reasonable understanding of a particular concept.
This reminded me of this passage from Andy Masley's brilliant post on learning:
A few years ago I decided to try to learn a lot more about China. I started with a pretty simple book called Wealth and Power that gave simple biographical sketches of important people in modern Chinese history. Having major figures in Chinese history in my head as simple characters was actually pretty useful, even though it drastically simplified reality. The actual history is incredibly complicated, but having a basic story I could make more complicated over time made it much easier to learn than diving into more nuanced reading at the start. Don't be afraid of starting off with an incredibly simplistic narrative of the world. You just need to add complexity to it later.
This, I think, perfectly captures one utility of LLMs.
I've also heard from my colleagues that LLMs are not only good at helping you understand basics but are also quite adept at helping you delve into topics in depth, though I've never tried going too deep myself. With LLMs, you can easily overcome the initial hurdle of grasping a new idea or concept.
They are remarkably good at helping you build a fuzzy map of the potential depth and breadth of a topic. In other words, they help lift the fog of your understanding to a great degree, which is phenomenally helpful in learning new things. Again, a trite observation is that LLMs are like having a person who knows a little about a lot of things in your pocket. There's some truth to that, but I would add that these tools know a lot about a lot.
Another feature of these tools that I've found useful is their audio capability. When reading books, I often have ChatGPT's voice mode on standby. If I have difficulty understanding a concept in a book or even the simple meaning of a word, I just ask it and engage in a two-way conversation. It's stunning how good the speech capabilities are.
But…
Yes, there's always a but.
The debates about AI tools run the gamut, from "they can help me fix typos in my blog post" to "they are just probabilistic statistical bullshitters" to "they will end civilization." Regardless of your current view, I highly recommend experimenting with them before drawing any conclusions.
Jumping to the conclusion that these tools are bad without trying them means you are being intellectually dishonest and lazy. Even worse, you are just recycling other people's opinions and passing them off as your own.
If you don't use these tools, you are not only missing out on some useful instruments that can enhance your capabilities and save time, but you're also forgoing the chance to understand what may be the defining technological advancement of our times, in both its good and bad aspects.
Let me give you an example of what it means to use AI tools. Tyler Cowen is probably one of the smartest humans alive. Social scientist Kevin Munger called him an "information monster," and that's a perfect description of the man. He probably reads more in a week than most of us do in a year. In a fascinating interview with David Perell, he talked about how he uses AI:
David Perell: Tell me about AI in the classroom. How are you using it, and what are your students struggling to understand?
Tyler Cowen: In my PhD class, which I mentioned earlier, there's no assigned textbook. That saves students some money, but they're required to subscribe to one of the better AI services, which does cost something, though much less than a textbook would.
The main grade is based on a paper. They have to write a paper using AI in some fashion, and they're required to report how they used it. But beyond that, my only instruction is: Make the paper as good as possible. How much of it is theirs? From my perspective, all of it - just as when you write with pen and paper or use word processing, that's still your work.
I also want them to document what they did because I want to learn from them. I've done this before: in a law class last year, I had students write three papers. One of the three had to involve AI, while the other two were done without it. That was a less radical approach, but it worked well, and students felt they learned a lot.
Other classes might tell them that using AI is cheating, but we all know there's an inevitable equilibrium where people need to be able to use these tools, especially as lawyers, but really in most walks of life. So why not teach it now?
I've used these tools while reading old and challenging books as well, and I have to say, they are remarkably helpful. If you use them in the right way, and if you can avoid being distracted by having your phone at hand while you read, LLMs can be wonderful reading companions. Your reading experience becomes richer, and you learn a lot more as a result.
AI is a hot-button topic, and people have strong opinions. The issue is that many are adopting lazy tropes about AI being either good or bad without critically examining it. In doing so, people forgo experimentation and the discovery of novel use cases that could add value to their lives.
This is not to say there are no downsides to using large language models. There are real risks of becoming overreliant on LLMs and losing one's ability to think and act independently. However, arriving at that conclusion requires a lot of thinking, experimentation, understanding the trade-offs, and then making an informed choice.
Of course, there's always the option of sticking your head in the sand and pretending that these are just dumb, stochastic chatbots: a remarkably convincing simulacrum of the humans that use them. As far as useful delusions go, this one may seem harmless, but it is deceptive.
Things we lost in the fire
Having said all this, there's always good and bad in everything, and the trick is to find the middle ground. Much of the angst and criticism about AI boils down to two questions: "What are we losing by using these tools?" and "What price are we paying?" Many thoughtful people worry that we're outsourcing our thinking to LLMs.
That is a basic encapsulation of the debate: relying on machines may cost us a part of ourselves. My summary is simple, but it raises deep questions: "What does outsourcing our thinking to machines mean?" and "What exactly are we outsourcing?"
At the very least, this question of delegation touches on the issues of meaning, responsibility, effort, our sense of self, and the stories we tell ourselves about ourselves.
Here's a far more thoughtful and intelligent articulation of this worry:
Marshall McLuhan saw art as "exact information of how to rearrange one's psyche in order to anticipate the next blow from our own extended faculties." If LLMs are an extended faculty, then what is the blow that we are anticipating? One of the answers, I think, is the loss of a sense of self. Every time I enjoy utilizing an LLM for research or experiments in writing, I become hyperaware of the convoluted ways in which I've constructed an identity around being "smart" or mere knowledge of certain things - with LLMs, that disappears. What other blows to our senses could we anticipate through the creation of art and procedures of art?
Erik Hoel wrote a brilliant article about this recently:
But if that's the clear upside, the downside is just as clear. As the Microsoft researchers themselves say…
While GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term over-reliance on the tool and diminished skill for independent problem-solving.
This negative effect scaled with the worker's trust in AI: the more they blindly trusted AI results, the more outsourcing of critical thinking they suffered. That's bad news, especially if these systems ever do permanently solve their hallucination problem, since many users will be shifted into the "high trust" category by dint of sheer competence.
The study isn't alone. There's increasing evidence for the detrimental effects of cognitive offloading, like that creativity gets hindered when there's reliance on AI usage, and that over-reliance on AI is greatest when outputs are difficult to evaluate. Humans are even willing to offload to AI the decision to kill, at least in mock studies on simulated drone warfare decisions. And again, it was participants less confident in their own judgments, and more trusting of the AI when it disagreed with them, who got brain drained the most.
The ever-thoughtful L. M. Sacasas talks about what it means to rely on machines, drawing on the work of Lewis Mumford:
But all of this, patient reader, is prelude to sharing the line to which I've been alluding.
It is this: "Life cannot be delegated."
Simply stated. Decisive. Memorable.
Here's a bit more of the immediate context:
"What I wish to do is to persuade those who are concerned with maintaining democratic institutions to see that their constructive efforts must include technology itself. There, too, we must return to the human center. We must challenge this authoritarian system that has given to an under-dimensioned ideology and technology the authority that belongs to the human personality. I repeat: life cannot be delegated."
I say it is simply stated, but it also invites clarifying questions. Chief among them might be "What exactly is meant by 'life'?" Or, "Why exactly can it not be delegated?" And, "What counts as delegation anyway?"
This is a genuine worry. Let's take the case of learning something new. In the absence of LLMs, we would muddle our way through by Googling, reading random books, talking to people, and so on. We would have invested a significant amount of time and effort, which, as critics of AI tools point out, is what made us human.
To learn is to endure the pain and discomfort and to reap the rewards. Now, with answers a chat message away, the question is: are we short-circuiting the learning process? And more importantly, what's the cost of having all this knowledge so easily available? Another element is that the delusion that "I know something" now comes almost free.
This quality of "not stopping at an unsatisfactory answer" deserves some examination.
One component of it is energy: thinking hard takes effort, and it's much easier to just stop at an answer that seems to make sense than to pursue everything that you don't quite get down an endless, and rapidly proliferating, series of rabbit holes.
It's also so easy to think that you understand something when you actually don't. So even figuring out whether you understand something or not requires you to attack the thing from multiple angles and test your own understanding.
This requires a lot of intrinsic motivation, because it's so hard; so most people simply don't do it.
The Nobel Prize winner William Shockley was fond of talking about 'the will to think':
Motivation is at least as important as method for the serious thinker, Shockley believed… the essential element for successful work in any field was 'the will to think'. This was a phrase he learned from the nuclear physicist Enrico Fermi and never forgot. "In these four words," Shockley wrote later, "[Fermi] distilled the essence of a very significant insight: A competent thinker will be reluctant to commit himself to the effort that tedious and precise thinking demands - he will lack 'the will to think' - unless he has the conviction that something worthwhile will be done with the results of his efforts." The discipline of competent thinking is important throughout life… (source)
But it's not just energy. You have to be able to motivate yourself to spend large quantities of energy on a problem, which means on some level that not understanding something, or having a bug in your thinking, bothers you a lot. You have the drive, the will to know.
The same applies to writing. To write is to think, and to think is to learn. Even before you write a single word, an immense amount of reading and thinking is required. By writing, we clarify our thoughts and puncture the delusion that we truly understand something. The romantic ideal of writing involves staring at a blank screen in frustration while engaging in a mental Kung Fu fight with that inner impostor who keeps mocking you for being a hack.
To write a sentence is to contemplate and question all the assumptions that go into it. This is hard and frustrating, but that's what makes it worthwhile. Now, a post or an essay is merely a prompt away in a sterile chat window that can write whatever you want, however you want. The question is: by asking ChatGPT to do our thinking and writing, are we losing out on the fruits of frustration? Worse yet, are we signing up to atrophy our thinking?
I write primarily to learn and explain things to myself. But now AI is good enough that most days the things I write end up as a large prompt for multiple conversations, and don't end up becoming essays.
I don't know what to make of this. - Rohit Krishnan
This is a worry I've been grappling with. I can write decently, but I'm horrendous at proofreading and editing. So I write my posts and use these LLMs for spotting typos and other grammatical mistakes. The question I keep asking myself is: am I losing out on something? I picked reading and writing because these are the two areas where I've seen people use AI tools the most. The same worries extend to coding:
I recently realized that there's a whole generation of new programmers who don't even know what StackOverflow is.
Back when "Claude" was not a chatbot but the man who invented the field of information entropy, there was a different way to debug programming problems.
First, search on Google. Then, hope some desperate soul had posed a similar question as you had. If they did, you'd find a detailed, thoughtful (and often patronizing) answer from a wise greybeard on this site called "Stack Overflow".
Professor Alan Noble on students using LLMs in class:
For teachers, the anxiety comes from the fact that it isn't that difficult to take most freshman composition prompts, input them into ChatGPT and get a decent B paper, or at least a few solid paragraphs you can shape into a B paper. And it won't be long before it can produce A papers. That's not just true for composition classes. It's a problem for history, political science, philosophy, theology, etc. Anytime you ask a student to write something, AI can produce a facsimile. And these facsimiles are not always easy to detect. Teachers who take the time to get to know each student's style through in-class writing can often detect significant changes in tone and style created by the use of AI. But I want you to consider how much time and attention and memory that involves. How many writing samples can you memorize? 20? 30? 50? 100? And what if the student matures in their writing or takes their work to a writing center to get help? What you think might be AI could just be growth! You might think that teachers could fix this problem with the use of more AI, but unfortunately, AI "detectors" can be wrong, leading to serious consequences, so that's not a viable option.
In an article, philosopher and teacher Troy Jollimore describes, through the lens of a health ethics class he teaches, the real-world implications of students relying on LLMs to write and read. Reading it made me shudder.
The ethical conundrums that health care workers encounter don't arrive neatly packaged like an essay prompt. It won't be anything you can google. Even if there were time to feed the dilemma into AI, the AI won't help you figure out just which questions need to be asked and answered. And in real life, this is often the most difficult part.
A person who has not absorbed, and genuinely internalized, a fair bit of ethical insight by the time they find themselves in such situations is highly unlikely to respond well, or even minimally adequately. One needs to be able to see a real-life situation and grasp what is central and essential in it, to imagine the various possible outcomes and to see who and what is at risk: threats and dangers often hidden under the surface of what we observe.
And there is a great deal at stake. Reacting badly to a complicated moral situation that you were not prepared for can haunt you for a long time. It is no exaggeration to say that in the health professions, such situations can be, in the most literal sense, matters of life and death.
I don't pretend to be able, in the course of a semester, to fully prepare students to face such situations. No one, after all, is ever as prepared as they could possibly be. But I can get them closer to being prepared, assuming that they commit and do the work. I can point out and help them explore examples of good ethical thinking while offering them a framework of theories and concepts to help make sense of complicated situations. In these ways, they can get better at recognizing the most pertinent considerations, and so better at framing the issues in more nuanced ways - ways more likely to lead to a good outcome, to a decision they can live with.
You can find any number of such constructive perspectives on what we are losing by relying on AI across countless other disciplines.
Here's Mark Humphries, who teaches history:
While I've heard lots of counter-arguments over the last couple years - ranging from the ethical to the aesthetic - I have yet to hear one that convincingly parries the core issue: our work has value because it is time consuming and our expertise is specific and comparatively rare. As LLMs challenge that basic equation, things will inevitably change for us just as they have for any economic group faced with automation throughout history. To be clear, I don't like this either and I have real moral, ethical, and methodological concerns about machine generated histories.
This is why we need to stop talking about LLMs in the abstract and start having serious conversations about the future of our discipline. My intuition is still that humans are going to need to be in the loop and that people will prefer human generated histories to machine made ones, but am I right? Even if I am, what is history going to look like in this brave new world? Are we willing to harness these tools and work alongside them? What exactly do we bring to the table that LLMs do not? If we keep avoiding these hard conversations, the rest of the world may move on without us. - Is this the Last Generation of Historians?
Political scientist and professor Paul Musgrave:
Is this fancy autocomplete? Maybe. But do you know what the best definition of me in office hours is, most of the time? Basically the same thing. ("Realism is focused on ___").
It's a hop, skip, and not even a jump from this level of copying-pasting to full-on integration with statistical analysis software. For the typical levels of analysis you might expect an organization to need to do (imagine a political campaign regressing vote share and personal contacts, or a charity seeking to increase donor amounts from solicitations), this is really just about good enough. This isn't Clippy.
I can cope by telling myself that I'm an expert and that I knew what questions to ask - crucially, what diagnostics to run - but I can also concede that my analysis of this dataset without Claude would not have been much better and would have taken a lot longer to compile.
You can find plenty of other positive perspectives as well.
More from the same Tyler Cowen interview:
Tyler Cowen: No one has taught them. Every year, in both my law and economics classes, I ask my students: Has anyone else been teaching you how to use AI? Silence.
To me, that's a scandal. Academia should be at the forefront of this. In fact, the students who are cheating - the ones using AI in secret - often know way more than their professors. Now, I don't condone cheating when it's against the rules, but I think that entire norm needs to shift; honestly, it needs to collapse.
Homework has to change. We need more oral exams, more proctored in-person exams, and other new approaches. This isn't something to put off; we need to adapt now.
Historian and teacher Benjamin Breen has written extensively about using LLMs in the classroom to simulate historical games:
But what about in the classroom, where the stakes are lower and confusion can be productive? Discussions of LLMs in education so far have tended to fixate on students using ChatGPT to cheat on assignments. Yet in my own experience, I've found them to be amazingly fertile tools for self-directed learning. GPT-4 has helped me learn colloquial expressions in Farsi, write basic Python scripts, game out possible questions for oral history interviews, translate and summarize historical sources, and develop dozens of different personae for my world history students to use for an in-class activity where they role-played as ancient Roman townsfolk.
And although all this might sound somewhat niche, keep in mind that there are thousands of classrooms where this sort of thing is happening. Education is a huge and impactful field, one that accounts for 6% of the total GDP of the United States and which represents one of the largest employment sectors globally.
Given this, I find it somewhat surprising that media attention so often focuses on the hypothetical uses of AI systems. Sure, future LLMs might develop novel cancer-fighting drugs or automate away millions of jobs. But that's by no means assured. What is definitely assured is that generative AI has already become interwoven with secondary and postsecondary education. According to one study, over 50% of American university students are using LLMs. That percentage will likely continue growing fast - especially given the rollout, just yesterday, of OpenAI's free "GPT-4o" model, which is a major advance over the previous free LLMs that were available.
Using LLMs to transcribe and understand old texts:
Will that change in the next year or two? Clearly, many people in the AI field and adjacent to it think so. But I favor the possibility that there is an inherent upper limit in what these models can do once they approach the "PhD level," if we want to call it that. In fact, that's exactly why I'm writing a book about William James and the machine age: because James, more than anyone, I think, recognized both the irreducibility of even scientific knowledge to a single set of data or axioms, and also understood that consciousness is rooted not just in abstract reasoning but in a physicality, a sensation of being a person in the world.
I don't discount the possibility that future AI models can be better historians than any human now living - but I think that's a multi-decade prospect, and one that will probably require advances in robotics to provide sensory, emotional, and social data to go alongside the textual and visual datasets we currently feed these models on.
All that said - yes, these things can definitely "do" historical research and analysis now, and I am 100% certain that they will improve many aspects of the work historians do to understand the past, especially in the realms of transcription, translation, and image analysis. I find that pretty exciting.
All these perspectives show how experts across disciplines are wrestling with hard questions about what it means to be human, to learn, and to read and write in the ever-growing shadow of artificial intelligence. At the very least, it should be abundantly clear that there are no easy or one-size-fits-all answers. Whether these tools are good or bad, and what they mean for human agency, are deeply context-dependent questions.
What's clear to me is that it's phenomenally stupid to jump to conclusions without using these tools and thinking about the trade-offs. The most valuable perspective might come not from categorical rejection or uncritical embrace, but from thoughtful experimentation and an honest assessment of what we gain and what we might lose.
What do you think?