
#81 – James Dominy on Why AI Is to Be Embraced, Not Feared

Transcript
[00:00:00] Nathan Wrigley: Welcome to the Jukebox podcast from WP Tavern. My name is Nathan Wrigley. Jukebox is a podcast dedicated to all things WordPress. The people, the events, the plugins, the blocks, the themes, and in this case, how AI and WordPress can work together.

If you’d like to subscribe to the podcast, you can do that by searching for WP Tavern in your podcast player of choice, or by going to WPTavern.com forward slash feed forward slash podcast. And you can copy that URL into most podcast players.

If you have a topic that you’d like us to feature on the podcast, I’m keen to hear from you. And hopefully get you or your idea featured on the show. Head to WPTavern.com forward slash contact forward slash jukebox, and use the form there.

So on the podcast today, we have James Dominy. James is a computer scientist with a master’s degree in bioinformatics. He lives in Ireland, working at the WP Engine Limerick office.

This is the second podcast recorded at WordCamp Europe 2023 in Athens. James gave a talk at the event about the influence of AI on the WordPress community, and how it’s going to disrupt so many of the roles which WordPressers currently occupy.

We talk about the recent rise of ChatGPT, and the fact that it’s made AI available to almost anyone. In less than 12 months, many of us have gone from never touching AI technologies to using them on a daily basis to speed up some aspect of our work.

The discussion moves on to the rate at which AI systems might evolve, and whether or not they’re truly intelligent, or just a suite of technologies which masquerade as intelligent. Are they merely good at predicting the next word or phrase in any given sentence? Is there a scenario in which we can expect our machines to stop simply regurgitating text and images based upon what they’ve consumed; a future in which they can set their own agendas and learn based upon their own goals?

This gets into the subject of whether or not AI is in any meaningful way innately intelligent, or just good at making us think that it is, and whether or not the famous Turing test is a worthwhile measure of the abilities of an AI.

James’ background in biochemistry comes in handy as we turn our attention to whether or not there’s something unique about the brains we all possess, or if intelligence is merely a matter of the amount of compute power that an AI can consume. It’s more or less certain that, given time, machines will be more capable than they are now. So when, if ever, does the intelligence Rubicon get crossed?

The current AI systems can be broadly classified as Large Language Models or LLMs for short, and James explains what these are and how they work. How can they create a sentence word by word if they don’t have an understanding of where each sentence is going to end up?

James explains that LLMs are a little more complex than just handling one word at a time, always moving backwards and forwards within their predictions to ensure that they’re creating content which makes sense, even if it’s not always factually accurate.

We then move on from the conceptual understanding of AI to more concrete ways it can be implemented. What ways can WordPress users implement AI right now, and what innovations might we reasonably expect to be available in the future? Will we be able to get AI to make intelligent decisions about our website’s SEO or design, and therefore be able to focus our time on other, more pressing, matters?

It’s a fascinating conversation, whether or not you’ve used AI tools in the past.

If you’re interested in finding out more, you can find all the links in the show notes by heading to WPTavern.com forward slash podcast, where you’ll find all the other episodes as well.

And so without further delay, I bring you James Dominy.

I am joined on the podcast today by James Dominy. How are you doing James?

[00:04:51] James Dominy: I’m well, thanks. Hi Nathan. How are you doing?

[00:04:53] Nathan Wrigley: Yeah, good, thanks. We’re at WordCamp Europe. We’re upstairs somewhere. I’m not entirely sure where we are in all honesty. The principal idea of today’s conversation with James is that he’s done a presentation at WordCamp Europe all about AI. Now, I literally can’t think of a topic which is getting more interest at the moment. It seems the general press is talking about AI all the time.

[00:05:17] James Dominy: Yeah.

[00:05:17] Nathan Wrigley: It’s consuming absolutely everything. So it’s the perfect time to have this conversation. What was your talk about today? What did you actually talk about in front of those people?

[00:05:24] James Dominy: Right. So my talk was about the influence of AI on the WordPress community. The WordPress community involves, in my mind, roughly three groups. You’ve got your freelancer, single content generator, blogger. You have someone who does the same job but in a business, as in an agency or a marketing or a brand context. And then on the other side, you’ve got software developers who are developing plugins or working on the actual WordPress Core.

And AI is going to be changing the way all of those people work. Mostly I focused on the first and the third groups. I don’t know enough about the business aspects to really talk about the agency and the marketing side of things.

Personally, I’m a software developer, so I suppose I really skewed towards that in the end. But my wife has been a WordPresser for 15, 20 years, which is how I ended up doing this. And she’s been using ChatGPT quite actively recently.

And she’s been chatting to me after work going, you know, I was trying to use ChatGPT to do X Y Z. And I thought, well, you know, that’s interesting. I know a bit about machine learning and the way these things work. I’ve read some stuff on the internals and I have opinions.

[00:06:33] Nathan Wrigley: Perfect.

[00:06:34] James Dominy: So that’s how I got here.

[00:06:35] Nathan Wrigley: Yeah. Well, that’s perfect. Thank you. It seems like at the moment the word ChatGPT could be easily interchanged with AI. Everybody is using it as a synonym for AI, and it’s not really, is it? It really is a much bigger subject. But it feels, at the moment, like the most useful implementation in the WordPress space. You know, you lock it into the block editor in some way, shape or form, and you create some content in that way.

[00:07:00] James Dominy: And I mean, I am absolutely guilty of that. I think the number of times I’ve said ChatGPT, when I mean AI generative systems or something, during my workshop this morning is well beyond count.

It is likely to fall victim to a trademark thing at some point. Like Google desperately tries to claim that Google is a trademark and shouldn’t be used as a generic term for search. I expect the same thing will happen with ChatGPT at some point.

[00:07:25] Nathan Wrigley: This is going to sound a little bit, well, maybe snarky is the wrong word, but I hope you don’t take it this way. It feels to me that the pace of change in AI is so remarkably rapid. I mean, like nothing I can think of. So, is there a way that we can even know what AI could look like in a year’s time, two years’ time, five years’ time? In other words, if we speculate on what it could be to WordPress, is that a serious endeavor? Or are we just hoping that we get the right guess? Because I don’t know what it’s going to be like.

[00:07:59] James Dominy: I think if we rephrase the question a bit, we might get a better answer. So AIs are human-designed systems. And there is a thing called the alignment problem, where there is an element of design to AIs, and we give it a direction, but it doesn’t always go the direction we want. And I think that is the unanswerable part of this question.

Yes, there are going to be emergent surprises from the capabilities of AIs. But for the most part, AIs are developed with a specific goal in mind. Large language models were developed, okay, I’m taking a wild educated guess here perhaps, but they were developed with the idea of producing text that sounded like a human. And I mean, we’ve had the Turing test since 1950, so more than seventy years now.

So I mean, that’s been a goal for seventy years. Everyone says that AI has advanced rapidly, and it has, but the core mathematical principles that are involved, those haven’t advanced. I don’t want to take away from the people who’ve done the work here. There has been work that’s been put into it, but I think what’s really given us the quantum leap here is the amount of computational power that we can throw at the problem.

And as long as that is increasing exponentially, I think we can expect that the models themselves will get exponentially better at roughly the same rate as the amount of hardware we throw at it.

[00:09:28] Nathan Wrigley: So we can stare into the future and imagine that it’s just going to get better and better and better. But we can’t predict the ways that it might output that betterness. Who knows what kind of interface there’ll be?

[00:09:41] James Dominy: Yeah. I think better is a very evasive term, perhaps, on my part. I think there are specific ways that it is going to get better. For example, we are going to see less confused AIs, because they are able to process more tokens. They have deeper models, deeper statistical trees for outputs. They’re able to take more context in and apply it to whatever comes out. So in that sense we’re going to see better output from an AI. Is it ever going to be able to innovate? Ooh, that’s a deep philosophical question, and I mean we can get into that, but I don’t know that we have time.

[00:10:20] Nathan Wrigley: I think I would like to get into that.

[00:10:22] James Dominy: Okay.

[00:10:22] Nathan Wrigley: Because when we begin talking about AI, I think the word which sticks is intelligence. The artificial bit gets quickly forgotten and we imagine that there is some kind of intelligence behind this, because we ask it a fairly straightforward, or even indeed quite complicated question.

And we get something which appears to pass the Turing test. Just for those people who are listening, the Turing test is a fairly blunt measure of whether you are talking to a human, or to something non-human masquerading as a human. And if something is deemed to have passed the Turing test, it’s indistinguishable from a human.

And so, I have an intuition that really, what we’re getting back is not intelligent in any meaningful sense of the word. It’s kind of like a regurgitation machine. It’s sucking in information and then it’s just giving us a best approximation of what it thinks we want to hear. But it’s not truly intelligent. If you asked it something utterly tangential, that it had no data on, it would be unable to cope with that, right?

[00:11:22] James Dominy: I think yes. If you can clearly delineate the idea of, we have no data on this. Which is very difficult considering the amounts of information these things consume. You know, give something access to Wikipedia, and that AI generative system might well be able to produce an opinion on practically anything these days.

But if it hasn’t read the latest paper on advanced quantum mechanic theory, it’s not going to know it. That text isn’t going to be there. Could it reproduce that paper? That’s a subtly different question, because then it comes down to, well, when a human produces that paper, what are they really doing?

They’re synthesizing their knowledge from a bunch of different things that they’ve learned, and they’re producing text in a language, in a grammar, that they have learned in a very similar way: statistically speaking, this sentence follows this grammatical form, because I have learned that as a child through hearing it several thousand times from the people around me and my parents. What’s different?

A more practical example here. I was having this discussion earlier today, and someone said, yes, but they’re not truly intelligent. But if you consider it, even now we can ask ChatGPT something, and I’m going to be abstract because I don’t have a concrete example here, I’m sorry. But we can say to ChatGPT, I want you to produce a poem in the style of Shakespeare, a sonnet or something, but I want you to use a plot from Goethe.

Okay, fine. Now it can do that. It can give you a response. I’m not sure that it’ll be a good response, I haven’t tried that particular one. But in that context, if you are asking a human to do that, we automatically make the assumption of other human beings that they understand. And, sorry, I’m making air quotes here. That they understand, in quotes, who Goethe is. That that is a person and a character. That Goethe has a particular style and a proclivity for a certain pattern in his plots.

And those are all, to use a computer science term, symbolic representations. Abstract concepts. So is ChatGPT actually understanding those abstract concepts? Does it understand that Goethe is a person? Educated guess here, probably not. But it does understand that Goethe refers to a certain... it can draw a line in all the stuff that it has learned and know, this is Goethe.

It has a concept of what it thinks Goethe is. Then from there it can say, and he has done work on the following things, and these are plots. And so it kind of understands. There’s another line there about what a plot is, which is a very abstract concept.

Does that mean it’s intelligent? Does that mean it understands? I don’t know. That’s my answer, because I did biochemistry at university, and there’s also the question there, and it’s exactly the same question. At what point do the biological machines, the biochemical machines, your actual proteins and things that are obviously, on their own, unintelligent, when they act in concert, produce a cell, and a living being?

Where does that boundary exist? Is it gray? Is it a hard line? And the same for me is true of the intelligence question here. Intelligence is an agglomeration of lots of small, well-defined things that, when they start interacting, become more than the sum of their parts. Does it come down to the Turing test? I mean, the fact that the little support popups on the web have to ask, are you a human, every now and then immediately says we have AIs that have passed the Turing test long ago.

But here in this case, the extended Turing test is, is the thing actually intelligent? I don’t know. I genuinely don’t know the answer there. In some sense, yes, because it’s doing almost the same thing as we are, just with different delineations and different abstractions, but the process is probably the same.

[00:15:33] Nathan Wrigley: Given that you’ve got a background in, forgive me, did you say biochemistry?

[00:15:37] James Dominy: Yeah, biochemistry and computer science, bioinformatics.

[00:15:39] Nathan Wrigley: Yeah, do you have an intuition as to whether the substrate of the brain has some unique capacity that can lock intelligence into it? In other words, is there a point at which a computer cannot leap the hurdle? Is there something special about the brain, the way the brain is created? This piece of wetware in our head.

[00:16:00] James Dominy: Unpopular opinion, I think it comes down to brute force count. We’ve got trillions of cells. Large language models, I don’t know what the numbers are for GPT-4, but we’re not at trillions yet. Maybe when we get there, I don’t know where the tipping point is, you know. Maybe when we get to tens of billions, or whatever number it happens to be, is the point where this thing actually becomes intelligent.

And we would be unable to distinguish them from a human, other than the fact that we’re looking at a screen, that we know it’s running on the chip in front of us. But what if it’s over the internet and it’s running on a machine somewhere, or we’re talking to a person in the support center? Or we are at the McDonald’s kiosk of 2050 and being asked whether we want fries with that? If we’re at the drive-through and we can’t see the person who’s asking the question, do we care?

[00:16:54] Nathan Wrigley: Interesting. You mentioned a couple of times large language models, often abbreviated to just LLM. My understanding at least, and forgive me, I really genuinely am no expert on this, is that this is the underpinning of how it works. I’m going to explain it in crude terms, and then I’m hoping you’ll step in and pad it out and make it more accurate.

[00:17:12] James Dominy: I should caveat anything that I say here with I also am not an expert on these, but I will do what I can.

[00:17:17] Nathan Wrigley: So a large language model, my understanding is that things like ChatGPT are built on top of this, and essentially it is vacuuming up the internet. Text, images, whatever data you can throw at it. And it’s consuming that, storing that. And then at the point where you ask it something, say, write a sonnet in the style of Goethe, written by Shakespeare, it’s then making a best approximation. And it’s going through a process of, okay, what should the first word be? Right, we’ve decided on that. Now let’s figure out the second word, and the third word, and the fourth word. Until finally it ends in a full stop and it’s done.

And that’s the process it’s going through. Which seems highly unintelligent. But then again, that’s what I’m doing now. I’m probably selecting in some way what the next word is and what the next word is. But yeah, explain to us how these large language models work.

[00:18:03] James Dominy: I think that’s a pretty fair summation. I think the important bit that needs to be filled in there is that what we perceive and use as customers of AI systems in general is a layer of several different models. There is a lot of pre-processing that goes into our prompts and post-processing in terms of what comes out.

But fundamentally, the large language model is, yes, strings of text generally. The AI image systems are a different form of maths. Most of them, at least the ones that I know of, are based on something called Stable Diffusion.

We can chat about that separately, but large language models tend to be trained on a large pile of text, where they develop statistical inferences for the likelihood of some sequence of words following some other sequence of words. So as you say, if I know that a pile of words were written by Goethe, then I can sub-select that aspect of my training data.

And I’m personifying an AI here already. The AI can circumscribe, isolate a portion of its training set, and say, okay I will use this subset of my training, and use the statistical values for what words follow what other words that Goethe wrote. And then you will get something in the style of Goethe out.
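To make that concrete, here is a minimal sketch of the statistical idea James is describing: count which words follow which in a pile of training text, then generate new text by sampling from those counts. Real large language models use neural networks over tokens rather than literal lookup tables, and the corpus here is invented purely for illustration.

```python
import random
from collections import Counter, defaultdict

# Toy corpus standing in for "a large pile of text".
corpus = "the quick brown fox jumps over the lazy dog . the lazy dog sleeps .".split()

# Learn, for every word, which words follow it and how often.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def sample_next(word):
    """Pick a next word with probability proportional to its observed frequency."""
    counts = following[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate: start somewhere and repeatedly ask "what's a likely next word?"
word = "the"
output = [word]
for _ in range(8):
    word = sample_next(word)
    output.append(word)
print(" ".join(output))
```

Sub-selecting the things Goethe wrote, as James describes, would amount to building those counts only from the Goethe portion of the training text.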

[00:19:29] Nathan Wrigley: It’s kind of astonishing that that works at all. That one word follows another in something which comes out as a sentence. Because, I don’t know if you’ve ever tried that experiment on your phone where you begin with the predictive text. On my phone there’s usually three words above the little keyboard, and it tries to say what the next word is based upon the previous word.

[00:19:49] James Dominy: It’s not called auto corrupt for nothing.

[00:19:50] Nathan Wrigley: Yeah, so you just click them, and at the end of that process you have fantastic gibberish. It’s usually quite entertaining. And yet this system is able to, in some way, just hijack that whole process and make it so that by the end, the whole thing makes sense in isolation.

It is Goethe. It looks like Shakespeare, sounds like Shakespeare, could easily be Shakespeare. How is it predicting into the future such that by the end, the whole thing makes sense? Is there more processing going on than, okay, just the next word? Is it reading backwards?

[00:20:22] James Dominy: Yes, absolutely. Again, not an expert on LLMs, but there is this thing called a Markov Model, which is a much more linear chain. It’s often used in bioinformatics, for predicting the most likely next amino acid or nucleotide in a genomic or a proteomic sequence.

And so Markov Models are very simple. They have a depth, and that is how much history they remember of what they’ve seen. So you point a Markov Model at the beginning of a sequence of nucleotide letters, the ACGTs. And then you want to say, okay, I’ve managed to sequence this off my organism. I’ve got a hundred bases and I want to know what the most likely one after that is, because that’s where it got cut off.

You give it the hundred, and maybe you have a buffer of ten, so it remembers the last ten. It sort of slides this window of visibility over the whole sequence and mathematically starts working out, you know, what comes after an A? Okay, 30% of the time it’s a C. 50% of the time it’s a G. And by the end of it, it can, with reasonable accuracy depending on how much information you’ve given it, predict, okay, in this particular window of ten that I’ve seen, the next one should be T.

And they get better as you give them more and more information. As you give them a bigger and bigger window. As you let them consume more and more memory whilst they’re doing their job, their accuracy increases.
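Here is a rough sketch of that sliding-window idea, with a depth of three rather than ten, and a made-up sequence; real tools work on far more data, but the mechanics are the same: slide a window along, count what follows each context, then predict.

```python
from collections import Counter, defaultdict

DEPTH = 3  # how much history the model remembers
sequence = "ACGTACGTTACGAACGTACGTAACGT"  # invented stand-in for real sequencing data

# Slide the window over the sequence, counting which base follows each context.
transitions = defaultdict(Counter)
for i in range(len(sequence) - DEPTH):
    context = sequence[i:i + DEPTH]
    transitions[context][sequence[i + DEPTH]] += 1

def predict_next(seen):
    """Most likely next base given the last DEPTH bases seen, with its probability."""
    counts = transitions[seen[-DEPTH:]]
    if not counts:
        return None  # this context never occurred in training; no prediction
    base, count = counts.most_common(1)[0]
    return base, count / sum(counts.values())

print(predict_next("ACG"))  # e.g. ('T', 0.83): in this toy data, T usually follows ACG
```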

I imagine the same is true of large language models. They don’t just predict the next word, they operate on phrases, on whole sentences. At some point, maybe they already do, I imagine they’ll operate on whole paragraphs. And again, it depends on what you’re trying to produce. Like if you’re trying to produce a legal contract, that’s got a fairly prescribed grammar and form to it. And then, statistically, you’re going to produce the same paragraph over and over again, because you want the same effect out of contracts you do all the time.

[00:22:22] Nathan Wrigley: You described this sliding window. That really got to the nub of it. I genuinely didn’t realize that it was doing any more than just predicting the next word. And because that’s the way I thought about it, I thought it was literally astonishing that it could throw together a sentence based upon just the next word, if it didn’t know what two words previously it had written.

It’s back to my predictive text, which produces pure gobbledygook. But it still, occasionally, goes down a blind alley, doesn’t it? Because although, presumably, 99 times out of a hundred that will lead to a cogent sentence which is readable, occasionally it does this thing, which I think has got the name hallucinate, where it just gets slightly derailed and goes off in a different direction. And so produces something which is, I don’t know, inaccurate, just nonsense.

[00:23:06] James Dominy: Yes. Well known for being confidently wrong for sure. I’ve experienced something similar, and I find that it is especially the case where you switch contexts. Like when you are asking it to do more than one thing at a time, and you make a change to the first thing that you expect to carry over into the context of the second task, and it just doesn’t. It gets confused.

This is especially true in coding, where you ask it to produce one piece of code and a function here, and another piece of code and a function on the other side. And you expect those two functions to interoperate correctly. Which means that you have to get the convention, the interface between those two things, the same on both sides.

But if you say, actually, I want this to be called Bob, that doesn’t necessarily translate. Again, I suppose this is my intuition. There are a lot of ways that that failure can happen. The most obvious one is that you’re doing too much and it’s run out of tokens.

Tokens are sort of an abstraction. Sorry, I use that word a lot. Computer scientist. Tokens are, they’re not strictly speaking individual words, but they are a rough approximation of a unit of knowledge, context. I don’t know what the right word is here. They chose token, right? So, if you use the API for ChatGPT, one of the things that you pass is how many tokens the call is allowed to use.

Because you are charged by tokens. And if you say only 30 tokens, you get worse answers than if you give it an allowance of a hundred tokens. Meaning that you might have given it a problem that exceeds the window that I was describing earlier. That sort of backtrack of context that it’s allowed to use.

Or you give it two contexts and together they just go over, and then it’s confused because it doesn’t know which is which. Again, I say this as a semi-educated guess. We as humans don’t have a good definition of what context means in this conversation. How do we expect a computer system to?
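For anyone curious what that token allowance looks like in practice, here is a minimal sketch using the OpenAI Python library as it existed around the time of this recording (the pre-1.0 ChatCompletion interface); the model name and prompt are illustrative, not a recommendation.

```python
import openai  # pip install openai (pre-1.0 interface)

openai.api_key = "sk-..."  # your own API key

# max_tokens is the allowance James describes: it caps how much the model may
# generate for this call, and tokens are the unit you're billed in. Set it too
# low and the answer tends to come back truncated or noticeably worse.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Make this sound more formal: see you at the meetup!"}],
    max_tokens=100,  # try 30 and compare what comes back
)
print(response.choices[0].message.content)
```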

[00:25:05] Nathan Wrigley: Just as you’ve been talking, in my head I’ve come up with this analogy of what I now think AI represents to me, and it represents essentially a very, very clever baby. There’s this child crawling around on the ground, I really do mean an infant, who you fully forgive for knocking everything over, tipping things over, damaging things and what have you. And yet this child can speak. So on the one hand it can talk to you, but it’s also making utterly horrific mistakes, because it’s a baby, and you forgive it for that. So I don’t know how that sits, but that’s what’s landed in my head.

[00:25:40] James Dominy: I wouldn’t say that AI is in its infancy anymore, but it’s probably in its toddler years, and maybe we need to watch out when it turns two.

[00:25:47] Nathan Wrigley: So we’ve done the sort of high level, what is AI, and all of that. That’s fascinating. But given that this is a WordPress event and it’s a WordPress podcast, let’s bind some of this stuff to the product itself. So WordPress largely is a content creation platform. You open it up, you make a post, you make a page, and typically into that goes text, sometimes images, sometimes video, possibly some other file formats. But let’s stick with the model of text and images. Why do we want, or how could we put AI into WordPress? What are the things that might be desirable in a WordPress site that AI could assist us with?

[00:26:21] James Dominy: I am totally going to be stealing some ideas from the AI content creation things that have happened this morning. I mean, there’s the obvious answer. I need to generate a thousand words for my editor by 4:00 PM today. Hey, ChatGPT, can you generate a thousand words on topic, blah?

I think there are a lot of other places. I’d be super surprised if this hasn’t actually happened already. But, hey ChatGPT, write me an article that gets me to the top five Google ranking.

The other obvious place for me as a software developer is using it to develop code. Humans are inventive. We’re going to see a lot of uses for AI that we never thought of. That’s not a bad thing at all. The more ways that we can use AI, I think the better.

Yes, there are questions about the dangers, and I’m sure that’s a question coming up later on, so I won’t dive into them now. But in the WordPress community, there’s content creation, but there’s also content moderation, where AI can probably help a lot. Analyze this piece of text for me and tell me, is it spam? Does it contain harmful or hateful content?

Again, it’s a case of you get what you give. There’s that story about Microsoft, I think it was Microsoft, and the chatbot that turned into a horrible Nazi racist within about two hours, having been trained on Twitter data. We need to be careful about that, certainly. I’m struggling to think of things beyond the obvious.

[00:27:47] Nathan Wrigley: Well, I think probably it is going to be the obvious, isn’t it? Largely, people are popping in text and so having something which will allow you within the interface, whether you are in a page builder or whether you’re using the Gutenberg editor, the ability to interrupt that flow and say, okay, I’ve written enough now, ChatGPT, take over. Give me the next 300 words please. Or just read what I’ve written and can you just finish this? I’m almost there.

[00:28:11] James Dominy: Yeah, we are doing it already, even if it’s a fairly primitive flow now, where we write some stuff in our block editor, copy it, pop it in ChatGPT or Bard or whatever, and say, hey, this is too formal. Or, this is not formal enough. And it’s really great at that. Make this sound more businessy. And it understands the word businessy. The tool integration is obvious in a lot of ways, but I think there are going to be a lot of non-obvious integrations. Like, oh wow, I wish I’d thought of that and, you know, made my millions off that product. I mean, Jetpack is doing it already, you know. I am able to actively engage with ChatGPT whilst I’m editing my blog post. Fantastic.

Another thing that I’ve just thought of is oh, I run a WooCommerce site and I want to use, not necessarily ChatGPT, but some other AI system to analyze product sales and use that to promote, to change the listing on my product site, so that I can sell more product. That’s going to happen.

[00:29:09] Nathan Wrigley: Yeah, given that it’s incredibly good at consuming data.

[00:29:13] James Dominy: Yeah, or even generating it on the fly. Generate 300 different descriptions of this product and randomize them. Put them out there and see which one sells best. We are doing that manually already. It’s A/B testing at a larger scale.

[00:29:28] Nathan Wrigley: Yeah. You can imagine a situation where the AI runs the split test, but it’s divided over 300 variations. And it decides for itself which is the winner.

[00:29:39] James Dominy: On a day-to-day basis.
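As a sketch of how that rolling split test might pick its winner, here is a simple epsilon-greedy strategy: mostly show the best-converting description so far, but keep routing a slice of traffic to the alternatives. The variant names, conversion rate, and traffic loop are all stand-ins for illustration, not any real WooCommerce API.

```python
import random

variants = [f"description_{i}" for i in range(300)]  # e.g. AI-generated product copy
shows = {v: 0 for v in variants}   # how often each description was displayed
sales = {v: 0 for v in variants}   # how often a display led to a purchase
EPSILON = 0.1  # fraction of traffic reserved for exploring the alternatives

def pick_variant():
    if random.random() < EPSILON or all(n == 0 for n in shows.values()):
        return random.choice(variants)  # explore
    # Exploit: show the variant with the best conversion rate so far.
    return max(variants, key=lambda v: sales[v] / shows[v] if shows[v] else 0.0)

def record(variant, bought):
    shows[variant] += 1
    sales[variant] += int(bought)

# Simulated traffic; in reality record() would be driven by real page views.
for _ in range(10_000):
    v = pick_variant()
    record(v, bought=random.random() < 0.01)  # toy 1% baseline conversion

best = max(variants, key=lambda v: sales[v] / shows[v] if shows[v] else 0.0)
print(best, sales[best], shows[best])
```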

[00:29:40] Nathan Wrigley: On an hourly basis. Implements the winner and then begins the whole process over and over again. I also wonder if, in WordPress, there is going to be AI to help lay things out. So at the moment we have the block editor. It enables you to create fairly complex layouts. We also have page builders, which allow us to do the same thing. So it alludes to what I was speaking about a moment ago.

Talking, so literally talking, as well as typing it in. I would like a homepage. I would like that homepage to show off my plumbing business, and here’s my telephone number. I’d like to have a picture of me, or somebody doing some plumbing, some additional content down there. You get the picture?

[00:30:17] James Dominy: Yeah, absolutely.

[00:30:18] Nathan Wrigley: A few little prompts, and rather than spitting out text or an image, whole layouts come out. And we can pick from 300 different layouts. I’ll go for that one, but now make the buttons red. The AI takes over the design process in a way.

[00:30:32] James Dominy: Yeah. I’m going to confess here that I’m absolutely stealing this opinion from the AI panel earlier. I think the danger for WordPress specifically there is that that level of automation, with human engagement and, you know, developing something through conversation with an AI, might actually skip WordPress entirely. Why must the AI choose WordPress to do this?

Maybe if we as a WordPress community invest in making WordPress AI integrated, then yeah, absolutely, hopefully we’re first to market with that in a way, and then it will generate stuff in WordPress. But there’s no reason for it not to choose, maybe, a Wix page as a better solution for you as a plumber, who doesn’t update things very often. You just want a static site, you know.

Chances are it’ll just say, here is some HTML, it does the job for you, it’s pretty. I made some images for you as well. And all you need to do is run this sequence of commands to SSH it up to the provider of your choice. Or, I have selected this provider because I know how much they all charge and this is the cheapest. Or you’ve asked for the fastest, whatever.

[00:31:41] Nathan Wrigley: Oh, interesting, okay. So it’s not just bound inside the WordPress interface. Literally, put this in the cheapest place as of today. And then if it changes in the next 24 hours, just move it over there and change the DNS for me.

[00:31:53] James Dominy: One day. For sure. Yeah.

[00:31:54] Nathan Wrigley: Okay. So that very nicely ties into the harms.

[00:31:58] James Dominy: There it is.

[00:31:58] Nathan Wrigley: What we’ve just laid out is potentially quite harmful to a lot of the jobs that people do inside of WordPress. We’ve just described a workflow in which many of the things that we would charge clients for, which we could potentially get AI to do. Whether that’s a voice interface or a visual interface or a type, we’re typing in.

So that is concerning, if we are giving AI the option to put us out of work. And I know at the moment, this is the hot topic. I’m pretty sure that there’s some fairly large organizations who have begun this process already. They’ve taken some staff who are doing jobs which can be swapped out for AI, and they’ve shed those staff.

And whilst we’re in the beginning phase of that, it seems like we can only swallow so much of people getting laid off. The problem, potentially, is if we keep laying people off over and over and over again, and we give everything over to the AI, we suddenly are in a position where, well, there are no humans in this whole process anymore. Does any of that give you pause for thought?

[00:32:53] James Dominy: Yeah, it certainly does. I think we should temper our expectations of the capabilities of AI. So there’s a technical term called a terminal goal. The delineation between specific artificial intelligences and machine learning, in that world, and the concept of general artificial intelligence, which is what everyone thinks of when they think of the I in artificial intelligence, is an AI that is capable of forming its own terminal goals.

Don’t get me wrong, we have AIs that are capable of forming what are called intermediate goals. If you tell an AI of a particular type to go and do a particular thing, then it is capable of forming intermediate steps. In order to do the thing you’ve told me, I need to first do this, which requires me to do that. And, you know, it forms a chain of goals, but none of those goals are emergent from the AI. They are towards a goal we have given the AI externally.

That ability to form a goal internally is the concept of a terminal goal. And we don’t have, large language models don’t have terminal goals. Large language models, Stable Diffusion, all of the different algorithms that are hot topics today, are all couched within the idea of solving a problem given to them as an input.

Which means there’s always going to need to be a human. At least with what we’ve got now. No matter how good these models get, how much brain power we give them. And this maybe is going against what I said earlier of like, I think it’s probably a quantity thing.

Maybe there’s a tipping point. Maybe there’s a tipping point where the intermediate goal that it forms is indistinguishable from a terminal goal in a human brain. But for the moment, I think there always needs to be a human there to give the AI the task to solve. OpenAI isn’t just running servers randomly doing stuff. It spends its computational time answering users’ prompts and questions.

[00:34:48] Nathan Wrigley: So if we pursue artificial intelligence research, and the end goal is to create an AGI, then presumably at some point we’ve got something which is indistinguishable from a human because it can set its own goals.

[00:35:02] James Dominy: The cyberpunk dystopia, right?

[00:35:03] Nathan Wrigley: But we’re not there yet. This is a ways off, my understanding at least anyway. But in the shorter term, let’s bind it to the loss of jobs.

[00:35:11] James Dominy: In my workshop this morning, I think the primary point that I wanted to get across is, if you are currently in the WordPress community, employed and/or making an income out of WordPress, then ChatGPT, Bard, generative AI, large language models are a tool that you should be learning to use. They’re not going to replace you.

Maybe that’s less true on the content generation side, because large language models are particularly good at that. But there’s a flip side to that because on the software development side, programming languages have very strict grammars, which means the statistical model is particularly good at producing output for programming languages.

It’s not good at handling the large amounts of complexity that can exist in large pieces of code. But equally so, I mean, if you ask it to give you a hundred items of things to do in Athens, whilst I’m totally, totally working hard at a conference, then you are probably going to get repeats. You might run into the confusion problem, the hallucination issue, at some point there, where a hundred is just too much.

Nobody has ever written an article of a hundred things to do in Athens in a day. I don’t know, I haven’t tried that. I’m guessing that there are going to be limitations. So some jobs are more under threat than others. But I think that if you’re already in the industry, or in the community and working with it, go with it, and absorb the tools into your day-to-day flow.

It’s going to make you better at what you do. Faster at what you do. Hopefully able to make more money. Hopefully able to communicate with more people, translations et cetera. Make your blog multilingual. There are a lot of things that you can use it for that aren’t immediately coming after your job.

The problem for me, and this again is the point that I was trying to get across in the workshop, the problem is the next generation. The people who are getting into WordPress today and tomorrow, and in six months’ time. Who are coming into a world where AI is already in such usage that it’s solving the simple problems. And the same is true of, my editor wants 200 words or whatever on fun things to do in Athens overnight.

Okay, great. ChatGPT can do that for the editor. Why does he need a junior content writer anymore? But the problem is, I mean, we’ve already said, sometimes it’s spectacularly wrong. Does that editor always have the time to actually vet the output? Probably not. And so the job of that junior is going to transform into, they need to be a subeditor. They need to be a content moderator almost, rather than a content generator.

But that’s a skill that only comes from having written the content yourself. We learn by making mistakes, and if we are not making those mistakes, because AI is generating the stuff, it’s either not making mistakes, or making mistakes that we haven’t made before ourselves, and thus don’t recognize as mistakes. So my fear of the job losses aspect of AI is not that it’s going to wipe out people who are working already. It’s that it’s going to raise the barrier to entry for the next generation. It’s knocking the bottom rung out of the ladder.

And unless we change the ways that we teach people the basics as they are entering the community, the WordPress community, the industry, and all the industries which AI is going to affect... You know, it’s a catch-22. We have to teach people to do stuff without AI, so they can learn the basics. But at the same time, they also have to learn how to use AI, so they can do the basics in the modern world.

And I mean, we get back to that old debate like, why am I learning trigonometry in school? Because maybe someday it actually helps you do your job. Admittedly, so far, not so much. But I will say this. History, I did history in school. That has surprisingly turned out to be one of the most useful subjects I ever did, just because it taught me how to write. Which I didn’t learn in English class. Go figure.

[00:39:17] Nathan Wrigley: It sounds like you are quite sanguine for now. If you are in the space and listening to this podcast now, everything is fine right now.

[00:39:26] James Dominy: Yeah.

[00:39:27] Nathan Wrigley: Maybe less sanguine for the future. Given that, do you think that AI more broadly needs to be corralled? There need to be guardrails put in place. There needs to be legislation. I don’t know how any of that works, but manufacturers of AI being put under the auspices of, well, it would have to be governments, I guess. But some kind of system of checks and balances to make sure that it’s not, I don’t know, deliberately producing fakes. Or that the fakes, the hallucinations, are getting minimized. That it’s not doing things that aren’t in humanity’s best interests.

[00:39:59] James Dominy: Absolutely. Yes. Although I’m not sure how we could do a good job of it, to be fair. The whole concept of, we want AIs to operate in humanity’s best interests. Who decides? The alignment problem crops up here where, it’s well known that we can train an AI to do something we think that it’s going to do, and it seems to be doing that thing until suddenly it doesn’t.

And we just get some weird output. And then when we go digging, we realize actually it was trying to solve an entirely different problem to what we thought we were training it on, one that just happened to have a huge amount of overlap with the thing that we wanted. But when we get to those edge cases, it goes off in what we think is a wildly wrong direction. But it is solving the problem that it was trained to solve. We just didn’t know we were training it to solve that problem.

As far as regulation goes, yes, I think regulation is coming. I really want to say nobody could be stupid enough to put weapons in the hands of an AI, but the human race has proved me wrong several thousand times already in history. Yeesh. I personally think that that’s an incredibly stupid idea. But then the problem becomes, what’s a weapon?

Because a weapon these days can be something as subtle as enough ability to control trading, high frequency trading. Accidentally crash a stock market. It’s already happened. Accidentally, and again, I’m air quoting the accidentally here, accidentally crash your competitor’s stock, or another nation’s stock market. AI is there, being used as a genuinely useful tool to participate in the economy, but the economy can be used as a weapon.

Putting AI in control of the water infrastructure in arid countries. Optimization, it can do those jobs a lot better. It can see almost instantaneously when there’s a pressure drop. So there’s a leak in this section of the pipe. Somebody needs to go fix it. And also it can just shut off the water to an entire section of the city because, I don’t know, it feels like it. Because for some reason it is optimizing for a different goal than we actually think we gave it.

The trick is we can say, we can input into ChatGPT, I want you to provide water to the entire city in a fair and equitable way. That doesn’t mean that’s what it’s going to do. We just think that that’s what it’s going to do. We hope.

[00:42:26] Nathan Wrigley: I think we kind of come back to where we started. If we had a crystal ball, and we could stare two years, three years, ten years into the future, that feels like it would be a really great thing to have at the moment. There’s obviously going to be benefits. It’s going to make work certainly more productive. It’s going to make us able to produce more things. But as you’ve just talked over the last 20 minutes or so, there are also points of concern, and things to be ironed out in the near term.

[00:42:52] James Dominy: Absolutely, yeah.

[00:42:53] Nathan Wrigley: We’re fast running out of time, so I think we’ll wrap it up if that’s all right? A quick one James, if somebody is interested, you’ve planted the seed of interest about AI and they want to get in touch with you and natter about this some more, where would they do that?

[00:43:06] James Dominy: The best way is probably email. I am not a social person in the social media sense. I don’t have Twitter. I don’t do any of that. So I’m probably terrible for this when I think about it. My email is, J for Juliet, G for golf, my surname D O M for mother, I, N for November, Y for yankee at gmail.com. Please don’t spam. Please don’t get AI to spam me.

[00:43:30] Nathan Wrigley: Yeah, yeah. James Dominy, thank you so much for joining us today.

[00:43:34] James Dominy: Thank you for the opportunity. It’s been great fun, and I’ve really enjoyed being able to kind of deep dive into a lot of the stuff I just had to gloss over in the workshop. Thank you.


Useful links.

ChatGPT

Stable Diffusion

Markov Model