Welcome to Press This, the WordPress community podcast from WMR. Each episode features guests from around the community and discussions of the largest issues facing WordPress developers. The following is a transcription of the original recording.
Doc Pop: You’re listening to Press This, a WordPress Community Podcast on WMR. Each week we spotlight members of the WordPress community. I’m your host, Doc Pop. I support the WordPress community through my role at WP Engine, and my contributions over on TorqueMag.Io where I get to do podcasts and draw cartoons and tutorial videos. Check that out.
This week marks the 20th anniversary of WordPress. I doubt that when Matt Mullenweg and Mike Little started working on the CMS, they could have predicted that WordPress would eventually power over 40 percent of the sites on the web. Over the past year, AI-powered tools like DALL-E and Midjourney have taken the world by storm, and we’re still in the early days of large language models and text-to-image generators. So it’s hard to tell which parts of this technology are actually game-changing things we’ll be talking about 20 years from now, and which parts are just novelties. But it does seem obvious that either way, people are already using AI to help create content for their websites, to help build their websites, and possibly even to change how they’ll interact with those websites in the future.
Joe, thank you so much for joining us today. Can you just get us started by telling us how you got into WordPress?
Joe Hoyle: Sure. And, thanks for having me on. How I got into WordPress? Yeah, it was a while ago, certainly, maybe 14 years ago or something like that, maybe a little longer. I think I was just out of college, building websites as you do at that kind of early, inexperienced stage, and came across WordPress.
I remember actually building a couple of sites in Joomla before that, then finding WordPress and thinking, this is a much better system that’s easier and smoother to use. And from there, it was fairly incremental growth up to the point of bringing Human Made into existence, I think 12 or 13 years ago, and then the whole journey that that’s been as well.
Doc Pop: Has Human Made been focused on WordPress most of that time?
Joe Hoyle: Yeah, exclusively so. When me and Tom Wilmot, my brother, founded the company, we were already doing WordPress development individually. You know, that was pre-custom post types, when WordPress as a CMS was still a novel concept.
And we were doing quite a bit of work around that already with some enterprise-size companies. So that’s really where we spun the company out from: bringing WordPress to the enterprise, I guess, and initially creating a very small, niche WordPress agency around that.
And then we’ve grown over time in terms of the number of people who work with us now, and the size of projects we do, and all that kind of thing.
Doc Pop: Well, I’m really excited to have you on the show today because you’ve seen things in WordPress and you’ve been around for a while, and I’m really kind of curious to hear your thoughts on this.
And the impetus for having you join us on this show today is because Human Made is doing a conference coming up. It’s an online conference called AI for WordPress. You have many great speakers lined up and it sounds like it’s gonna be an interesting conversation happening on May 25th online. If people are interested, they can go to humanmade.com to find out more information about that.
But I would like to hear: what should agencies know about AI? What’s a top-level thing to keep in mind if an agency’s just getting started thinking about exploring AI?
Joe Hoyle: Yeah. Well, for WordPress agencies like Human Made specifically, the majority of our skillset, the bread and butter of the industry, is in building WordPress solutions.

And the question is whether AI is just another one of those tools, or something a lot deeper than that. I suppose we should see AI as more of an augmenter to everything that we do, and that could be the technology and solutions we build, but we don’t really know yet how deep or wide-reaching it is, in terms of how it changes everybody’s job, all the way through to wider societal change, for example.
So where exactly AI stops, I don’t really know, but I can already see obvious applications for AI, both in terms of how we build solutions with WordPress for customers, and also in how customers’ expectations of technology are shifting quite quickly as they see what people can now do with computers when they’re assisted by AI.
So I feel like it’s changing from both of those angles. What’s fairly certain is that the work we do as developers and engineers is undoubtedly gonna change. I mean, one of the first places where these large language models have gone is actually writing code, which might have been quite surprising. It might have been one of those things you’d have put quite low down the list of industries to be disrupted by AI initially.
That’s where I guess I can see it playing out in the beginning. Where will we be in one year or longer? It’s difficult to say, because the technology itself is evolving fairly rapidly, and at the same time, humans everywhere are still adjusting to the point the technology is at now. We are probably lagging a little bit in our adoption, in understanding the implications of just being able to generate very natural human language. So I feel like, longer term, it’s anybody’s guess where we end up.
Doc Pop: Are developers needing to learn machine learning and Python?
Joe Hoyle: Right. I think that’s a great question, and I will be speaking a little bit about this on Thursday as well. There’s a big spectrum when it comes to AI, and that’s really morphed out of these more recent advances in machine learning and large language models as well.
So, it’s difficult to know how deep you need to go into this, I suppose. I think Matt’s call to encourage people to use these tools in whatever way they can, to become familiar with them, is certainly a good first step, because it creates familiarity, and it’s not that everybody needs to become a kind of neural-network computer scientist overnight or anything like that.
I think initially, using them for your own work, or even non-work activities, gives you some level of familiarity, so at least you’re not completely unaware of what these tools are capable of and what they can help with. Then there’s going beyond that as engineers, and I think the definition of who engineers are is probably gonna shift with this as well, ‘cause there’s so much capability in these AI models that can help you there.
There’s certainly two sides that I see to that. One is using AI to improve your own work. “Okay, I used to write code; now I’m gonna write code with the help of an AI,” whether that’s Google’s Bard, ChatGPT and the ChatGPT plugins on top of that, Copilot from GitHub, or Copilot X, which I think is around the corner.
All of those tools can increase your productivity as a programmer. But that, I would say, is getting up to speed in the same way an accountant might need to get up to speed with how AI can impact their job and the work they’re doing. So again, that’s kind of table stakes to some degree: understanding how these tools can help us as developers and programmers.
But then there’s the other side: creating solutions using these tools. And that’s where you do have this very big spectrum. At one end it’s something like, “I’m gonna use an OpenAI API for generating some text, for example, and incorporate that into the software I’m building.”
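To make that shallow end of the spectrum concrete: a text-generation call to a hosted model is really just an HTTP request. The sketch below (Python, since no code appears in the episode; the model name and prompt are illustrative, not anything Joe specified) builds the JSON payload for OpenAI’s chat completions REST endpoint:

```python
import json

# OpenAI's chat completions endpoint (the kind of REST API Joe refers to).
OPENAI_CHAT_URL = "https://api.openai.com/v1/chat/completions"

def build_completion_request(prompt, model="gpt-3.5-turbo", max_tokens=256):
    """Build the JSON payload for a single-turn text-generation request."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_completion_request("Suggest three headline variants for this post.")
body = json.dumps(payload)
# In a real integration you would POST `body` to OPENAI_CHAT_URL with an
# "Authorization: Bearer <your API key>" header, then read the generated
# text from choices[0].message.content in the response.
```

From a WordPress solution, the same request would typically be made server-side, for instance with `wp_remote_post()`, with the generated text then stored or surfaced in the editor.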
And then you have something a little more in depth around training pre-existing models. You’ve got things like the LLaMA model from Facebook that you can train yourself, and a lot of the Google stuff that was recently unveiled is “use our model, but add your own training on top.”
So you’ve got that, but then you’ve got lower levels of that, which might be taking a lot of the open source models and running them yourself on hardware that you control. That’s a case of getting Python set up, along with the beefy hardware that needs to go with it. And then you obviously have the much lower level, I would say, of really understanding the nuts and bolts of how neural nets work, and all of that.
And obviously, the deeper you go down this stack, the less you’re gonna be relating it to your WordPress solutions. That’s always been the case; if you just look at the language technology stack, people use WordPress APIs, which are written in PHP, so they have some good understanding of PHP. But PHP is written in C, and WordPress developers typically don’t really need to know C at all.
And then C itself compiles to machine code. But again, once you’re a WordPress developer, you are a long distance away from needing to know anything about that. And that’s, I guess, the same spectrum that I’m seeing emerge with AI.
And it’s been a really exciting year, because the APIs have reached a very high level of abstraction. As a web engineer, it’s now at the point where they’re speaking your language: okay, it’s a REST API, I pass it an instruction and some data, and I get something back. So now I understand how I can work that into an application, because I’ve been working with APIs for all these other things.
I feel like the last year has actually got it to the point where it’s really fairly applicable, and it’s a fairly small adjacent move in learning, really, to take your full WordPress stack understanding and expand that to being able to use some of these capabilities in the solutions that you make.
Doc Pop: Well, that’s a great spot for us to take a quick break, Joe. We will be right back to talk with you about more things that agencies should keep in mind when working with AI, and so stay tuned for more Press This after the short break.
Doc Pop: Welcome back to Press This, a WordPress community podcast. I’m your host, Doc Pop, and today we’re talking with Joe Hoyle, the CTO and co-founder of Human Made, about what agencies need to know when working with AI. We just ended that first segment talking about the tools people could use, and how deep they may or may not need to go.
On a more practical level, as someone who’s working with an agency, can you tell us, maybe with specific examples, and you can leave out client names if you need to, how AI is being utilized by Human Made when working with a client right now?
Joe Hoyle: Yes, it’s a good question. And I’d say for us, it’s definitely early days, like it is for many right now. I mean, within AI there’s so much of a hype cycle around it, really. To be honest, a couple of months ago I was a little skeptical of whether this was just gonna be a hype cycle that comes and goes.
That’s kind of how I would view a lot of the, let’s say, crypto and Web 3 type stuff, for example, where I was fairly skeptical too. But once I actually got stuck into applications, in terms of, okay, what problems can large language models and the image generation models actually solve, I saw a lot of practical benefits, let’s say.
But in terms of how that translates to client work for an agency, I think it comes from two directions, as things often do in client work. Either your client is asking for something: they’ve come up with a great idea around how AI could be applied to a problem, and they’re looking for their CMS to be able to do that. That’s really a case of translating product owner requirements into code, and I think that side is in some ways easier to deal with, ‘cause you understand what you’re trying to build.
We have a customer in financial services, for example, and one specific challenge for them is around content production: essentially the quality of the content being produced, and also their editorial team’s understanding of the content that’s already been created.
And that’s an area where AI can help quite a lot, particularly in answering questions of your existing content. As you’ve probably seen, the conversational interfaces with AI are very good, so you can ask questions of your content: Have we ever written about X, Y, or Z? Or what’s the next thing we should consider in this series of posts? That kind of thing.
And then also, at the more granular editorial level, AI tools have shown themselves to be really useful for editorial work: rewriting for better grammar, changing tone, SEO optimization, that kind of thing.
So we’ve got a couple of ongoing conversations around those. This is what our work usually is: enterprise-y use cases that are quite specific, where a little bit of a productivity win for an editorial team is gonna make a big difference, and it’s worth the investment to do that.
Those things do seem to largely fall around improving the productivity of content production, and around understanding semantic meaning and interrogating existing content.
For those kinds of problems, I would definitely like to see, at Human Made for example, but I think this applies to all agencies really, that when we’re coming up with solutions to customer requirements, we have AI as part of that tool belt. So it could be content classification problems, things like that, right?
We want to select the categories for this thing, or we want to do sentiment analysis on user-generated content, or we don’t want people to upload obscene images. For all those kinds of problems, being fairly aware of what you can achieve with the AI tools that are out there, from a development point of view, is very valuable, because ultimately you can short-circuit a lot of long, in-depth programming work with these models.
I’ve implemented a similar algorithm many times, around finding similar posts by tag or something like that, right? I write code, I loop through the tags, I compare each post’s tags against my list of tags and try to match them. I order the results by the number of tags in common, and then I strip out the ones that have already been shown on the page.
You build up this whole algorithm, but what you’re actually doing there is approximating a requirement from a customer that’s a little broader: “show similar content” or something like that.
You can literally give something like GPT-4, via the API, that human instruction along with your list of tags, and it will somewhat miraculously come up with a solution to those kinds of problems, and generally a lot more accurately as well.
Because, not to go too deep on what programming is, you’re always feeding in a requirement or an outcome, and you’re ultimately having to break that down into logical rules and very definite steps.
And there’s something that can be lost in that translation. You have to choose the things you can actually objectively implement, or the areas where you actually have structured data to deal with, and you don’t really go further than that. So if the right metadata isn’t there, you just kind of make do.
Now, with large language models, you can feed them a lot of unstructured data, and you can give them relatively unstructured descriptions of the outcomes you’re looking for. And lo and behold, they have a very good success rate of interpreting that and giving you back a very good result, which is not only a lot quicker to implement but in many cases, I would say, a lot more accurate.
But accuracy assumes you have a very specific picture of exactly what you want, and the reality is that in many cases you don’t. That’s where using a CMS, in the worst case, becomes a glorified database of hundreds of fields to fill in, because we program in very strict logical terms, where everything needs to be structured in a separate field, and then people just feel like they’re doing data entry. I see these tools being able to break down that barrier a lot, letting us potentially work with much more unstructured data and build interfaces that are a lot more human for our clients to use.
Like in the backend, when they’re authoring content or whatever else they might be doing: creating campaigns, events, all of these things where we’re used to making people follow a very strict, regimented process. Oh, you forgot to enter the right time zone in that box, you didn’t tick that button.
We are kind of getting to the point where we are gonna be able to provide interfaces to clients that are just a lot more natural, where they can kind of describe what they want and then we can use these Large Language Models to ultimately tie that all together in a much more coherent way.
Doc Pop: What you’re describing in that last segment, it sounds like you’re saying that AI could not just change the way that we generate content or the way that we generate code for sites, but even change the way that viewers come and use the web, is that right?
Joe Hoyle: I think so. I mean, primarily I’m thinking of the content editor: how do they describe to the CMS what they want? I’d say that’s a fairly boring process right now, however many fields you’re filling out. I think we’re yet to see exactly how this transforms web experiences from the end user’s point of view.
And again, the landscape there is gonna change and user expectations are probably gonna change. The general public are using these AI tools a lot as well. I think ChatGPT has like north of 120 million users or something.
So their expectations are gonna be changing: whenever I see a search box, why can’t I just ask it a question and have it give me the response? I really do see this major shift that we’ve had in AI as changing the way that people interact with their computers.
And I think we have kind of been in this transition for a while, right? With Siri or Alexa, people have been shown this promise that you can naturally talk to a computer. But the reality is it just hasn’t really worked, and most people are still stuck sitting down with a keyboard and a mouse, doing it the traditional way.
That has definitely changed in the past year, and I think user expectations are gonna shift with it, given all of the stuff that Google unveiled a couple of weeks ago at its event, in terms of all of its products now getting these capabilities built in. We’re probably gonna have Apple coming next month as well.
WordPress is a CMS, and the solutions built on it are not isolated from all of that industry change; the expectations are gonna be going up on us as a WordPress project, and on us as WordPress agencies, in terms of what we deliver as well.
Doc Pop: And that’s a great spot for us to take one more break, and when we come back, we’ll wrap up our conversation with Joe Hoyle about AI and agencies. So stay tuned for more.
Doc Pop: Welcome back to Press This, a WordPress community podcast. I’m your host, Doc Pop, and I am chatting today with Joe Hoyle, the CTO and co-founder of Human Made about AI and its place in WordPress agencies. How can agencies think about using AI? That last segment we really got in deep. I really appreciated your thoughts there, Joe.
I’m having a hard time, as a reporter, being more rigorous with my definitions. I tend to say AI when I should probably be specifically saying “text-to-image generators”, or “large language models”, or whatever powers the self-driving cars we see in San Francisco. We lump them all together.
And I think the last thing I want to talk to you about: there are a lot of folks who are concerned about how AI could affect them. There are these different types of AI, right?

The large language models and all these different things, and then these different types of concerns. So it’s hard to address them all, right? Because there are so many different little things that people have. How are you trying to work with people who might be hesitant about using AI, in any form, for their work?
Joe Hoyle: Yeah, I think it’s a really good question. Because this has, for better or worse, all been labeled AI, whereas maybe a few years ago, AI meant more of a conscious thing that we’re talking to, something that’s all-seeing or all-knowing or whatever.
The human language these recent models generate has looked so convincing in many cases that now we call this whole thing AI. But really, I think this is one piece of the puzzle on a much larger journey that we’re on, in terms of developing actual, let’s say, artificial intelligence.
But nonetheless, there are still considerable safety issues that I think we need to watch out for. And I’ve definitely been a proponent of having a surface-level understanding of how an LLM works, for example, because it’s really important to know that the things an LLM tells you aren’t necessarily factual. So you definitely shouldn’t use ChatGPT just to give you a yes/no answer on a question or something like that.
They’re also trained on something like half the internet’s worth of data, which includes a huge amount of inaccurate information. On top of that, these models don’t actually truthfully represent the data you feed into them. So even in an ideal world, if you had fed them, let’s say, something like Wikipedia, they’re still gonna hallucinate a lot of facts, because they just don’t really have a semantic understanding of cohesion and contradiction. And they’re also trained on a lot of data, written by people, that is very biased.
So there’s a huge amount of bias in these models, obviously, in terms of gender bias and racial bias and things like that. Once you understand that, okay, what is ChatGPT doing? It’s ingesting a lot of content from the internet, of varying quality, and it’s jumbling all of that up, creating associations.
And ultimately there’s an algorithm that’s able to print out streams of text that, at a squint, look like coherent writing. Right? But it doesn’t necessarily represent a cohesive worldview. It can say things that contradict itself, and all of that.
Therefore something like ChatGPT is a long way away from some kind of oracle artificial intelligence that really does know everything and can give you the right answer to anything.
Nonetheless, that doesn’t mean they’re not useful. They can be used for a lot of things, but the area where I think they’re most useful is really in cases where you have a human in the loop, so to speak.
So, like, you’re asking it to help you write something, or make suggestions, or explain a concept, and you’re using that as a jumping-off point for further research. Really, you’re using it as part of your creative process, in idea generation for the most part.
And that’s no different to how we might have used word clouds for brainstorming sessions and things like that. You’re really just trying to overcome some kind of writer’s block you might have on a challenge, but you’re still keeping yourself very central.
I think there are major issues when you actually start to offload the cognitive burden of understanding: you have it generate a thousand posts for you and you just publish them, or you plug it into your HR pipeline to make decisions based on how qualified people seem from their resumes.
In those kinds of situations, you really are exposing yourself to a lot of inaccuracies and bias and misinformation, because you’re letting it run wild.
And that is something I do worry about a bit: how much of a race to the bottom in content creation are we gonna have?
I think we’re still on the side where that hasn’t quite happened yet. But quite feasibly, we could be doubling the amount of content on the internet every six months or something, with the sheer amount of content farming you can now do. And that’s one specific thing I really wouldn’t like to see: the signal-to-noise ratio vastly changing.
I mean, it’s maybe not perfect as it is on the internet, right? But that’s just one of many challenges I think.
Doc Pop: Well, Joe, it’s been a pleasure chatting with you today. I really appreciate it. If people wanna learn more about what you’re working on, I’d recommend going to twitter.com and finding Human Made, @humanmadeLTD, all one word, or just going to humanmade.com to find out more about your agency and about the upcoming AI for WordPress conference that’s happening.
Again, that’s happening on May 25th, so if you’re hearing this, there’s possibly time that you can still sign up and enjoy that.
Doc Pop: Thanks for listening to Press This, a WordPress community podcast on WMR. Once again, my name’s Doc, and you can follow my adventures with Torque magazine over on Twitter @thetorquemag, or you can go to torquemag.io, where we contribute tutorials and videos and interviews like this every day. So check out torquemag.io or follow us on Twitter. You can subscribe to Press This on RedCircle, iTunes, Spotify, or you can download it directly at wmr.fm each week. I’m your host, Doctor Popular. I support the WordPress community through my role at WP Engine, and I love to spotlight members of the community each and every week on Press This.