What is the W3C doing about AI?

Watch on YouTube

The W3C Report

It's written as an academic paper, and there is a presentation on the W3C's YouTube channel. It's an open document that is first seeking community feedback via GitHub.

This may lead to in-depth stakeholder interviews, W3C workshops, and the development of a standardisation roadmap. It's looking at:

Ethical and Societal Impacts

Technical Impacts and Standardization Proposals

There are some interesting contributions on GitHub already. Some question the threat AI poses to the Web: AI does not link out to sources in the same way the Web does.

Accessibility is another. Jakob Nielsen, the "king of usability", thinks accessibility has failed and generative AI could do the job. Do we still need alt text and ARIA labels? Or language translation?

Tim Berners-Lee's AI predictions



[00:00:05] Nathan Wrigley: Welcome to the No Script show about modern web design, where we look at what we can build today with minimal software and skills. Today we're talking about AI from the perspective of the W3C, or the World Wide Web Consortium.

Helping us with this is a new report from the W3C team called AI & the Web: Understanding and Managing the Impact of Machine Learning Models on the Web. Additionally, we have Tim Berners-Lee's AI predictions, thanks to some recent comments made when celebrating 35 years of the web. So David, this is a topic we both enjoy debating, so it'll be interesting to see how our positions on AI have advanced.

And with that, over to you.

[00:00:47] David Waumsley: Yeah, we've been talking for about two hours before this. We just couldn't stop. So we caught up on that, and we'll try and stick, as best as we can, to what the W3C are saying, because we can go off on so many tangents. And I think there'll be other subjects here, won't there?

I'll give you my quick summary. I'm in the position where I was excited about it and I still am, but I've never really bought into the idea that it's going to threaten any creative job that I might do. I quite like how the large language models are helping me to do some really tedious tasks these days.

And I do think it's quite interesting, because it feels like we've moved away from that initial hype, where we were more panicked about it. We're seeing more of the kind of news where companies are feeling a little bit forced to exaggerate what AI can do, and more about how much more costly it might be to make it advance in the way we expect it to.

So yeah, it's quite interesting. The last time we spoke about it there was more panic over it, whereas it feels a bit more relaxed now.

[00:01:57] Nathan Wrigley: I flip flop on this subject. I go from a deep admiration of AI and the things that it can do to blind panic, like Chicken Little, the sky is falling in. And I really do.

And I think the reason is because there are some areas where I honestly thought computers, let's call it that, computers could never encroach. The creation of artwork or the creation of songs, that kind of thing. Five years ago I really did think that would be an impossibility. There was something about the human brain which was unique.

And then more recently, things which you could call art and things which we could call music came along, and it really did start to make me think, gosh, okay, this is a worry. And I think the worry for me is that we're gonna gouge out the bottom rung of the ladder. If you imagine a career ladder, if you like, you need people to get on the first step of the ladder in order to then progress to other things as their career goes on.

And it feels like AI has the potential to just knock out the bottom two steps of the ladder, because it can approximate a decent song, it can approximate a decent logo, or whatever it might be. And so I worry that the people who might have had those jobs, and therefore gained experience, are gonna somehow have to get above what the AI can do to be useful in a career.

And so that worries me. But then equally, over the last year I'm feeling this sense of plateau in what the AIs can do. It doesn't really seem like what it could do a year ago is that different from what it can do now. And I expected the improvement would be exponential, that it would just improve at a greater rate, whereas it seems to have plateaued.

But the other thing is, with technology since, let's say, the turn of the millennium, the year 2000, you didn't really know what the intention was. You didn't really have a roadmap in your head of what the internet ought to be like. It just evolved. Whereas with AI, you can perceive what the goal is, and that is a replacement of humans.

If you can get an AI to create a perfect film, a perfect song, you know what that is already. Whereas the internet, I never knew what it was gonna be. So anyway, I think on balance I'm more sanguine about it. And certainly every time I've used AI, for the tiny little corner of the AI possibilities that I use it for, I've been profoundly impressed by what it can do.

And I have this notion in my head, which is probably self-deception in a way, that because I would've been doing all of that work myself, I'm not doing somebody out of work, I'm just saving myself some time. I feel that's a decent trade off, but obviously if that were played out across the whole world, it might not be quite such a good endeavor.

Anyway, sorry, that was a complete rant.

[00:05:07] David Waumsley: Yeah, no, it's fine. Actually, you made me think of something we didn't even talk about before. When it might cut out that bottom layer, the fundamentals, that, funnily enough, is where I think it fails. Where it's pretty rubbish at doing stuff is things like marking out some HTML which might be appropriate to the context, and it's terrible with CSS, I've found.

Sometimes it can take the chore out of things that are absolutely known. I used it for WordPress, really impressive, because the PHP commands you might use for traditional WordPress are so set that there's not much variation. So I could ask it to do something I couldn't do; it could write the code, which I couldn't, but I know it's going to be correct, because there are only so many outputs. Where I think it fails is sometimes the fundamental structuring of a website.

It just doesn't do it. And it's the same because it's always looking backwards at its training data. CSS has evolved particularly, so it doesn't know the new stuff, and it looks back to the majority of the old stuff, which is not the best way to do it. So I find, actually, in some ways I'm going back to the basics of the very early web and trying to get to understand things which I thought I knew, but don't. HTML and CSS particularly, as they're changing. And I think it's so bad at those things that I have to guide it on the fundamentals. So that is my worry with AI: sometimes it can be used to fill in what you think is a good job. A good example for me, and I did a short video on this, is AI writing alt text for images, which seems to be a new thing. And you think, yeah, that's great. But if you think about it for one second, the last thing a blind person wants is all this wonderful prose on what the image looks like. They want to know what a

[00:07:04] Nathan Wrigley: game changing whatever. Yeah, just overly superlative.

[00:07:09] David Waumsley: Yeah, but no, what I'm saying is that if you really describe a full image, as AI does... Oh, I

[00:07:16] Nathan Wrigley: see. Yes. It won’t know when how to truncate itself or, yeah,

[00:07:20] David Waumsley: exactly. Actually what that image is doing, it wants to know what that image is doing, what parts of that image are. Are meaningful to the context in which it’s set right.

And the rest is all superfluous. Yeah. So do you see what I mean? I can see where it's these lower level things, some of these jobs, where I think you actually have to be a human being, because you're providing for other human beings. Yeah.

[00:07:42] Nathan Wrigley: no, but that’s interesting. And I, do wonder, I wonder particularly in the, with the example of CSS, because the CSS isn’t the point.

The point is what the CSS does, which is visual. I wonder if the AI at the moment, these large language models, can make the connection: okay, here's some code, but what does that actually end up looking like? That's the point, and it might not be able to bridge that gap. I don't know, I'm entirely ignorant of whether that's even a thing,

[00:08:15] David Waumsley: but Yeah. And for speed, I want it to be fast and in the context of the whole site, and it just can’t do it, and I just avoid it because, it just starts hallucinating as. AI does when I start, it doesn’t know stuff.

It thinks it does, and I love sending it off down its hallucinations, where it believes it's right and it's completely wrong, and I enjoy watching. But other stuff, it's great. When you are making the creative decisions, treating it as a dumb assistant, I think, is wonderful.

Should we bring up the notes? Because honestly we'll just go on. Yeah, I will do that.

[00:08:51] Nathan Wrigley: One moment. Here we go. These are the notes. If you are listening to this, let me get the number right, because in the past I've got the numbers of the episodes wrong. We're on episode number 13, one three. And so if you go to noscript.show/13, you'll be able to find these show notes.

Have I got the episode right, for a start? Yes, you have, luckily for everyone. Okay, great. So here we are. Where should we launch first? Yeah, this W3C report.

[00:09:20] David Waumsley: I just thought this was a good chance, because it's recently out, and obviously the title doesn't really beg it to be read.

It's an academic paper, and there is a presentation. I've got a link in the show notes which goes to the YouTube channel. The guy behind it is French, so if you can tune into his accent, you might find it quite interesting as a quick summary of what they're trying to do with it.

But my understanding is it comes from the W3C team, is that what they call themselves? Yeah. They've got about 46 employees in that team, and this is really just a first document out there, which might later form standards. So the idea is that they put this out.

At the moment it's public, so it welcomes feedback. It's out on GitHub, so if you think this report is missing something they should be looking into, you can add to it. And we'll talk about that, but we've got a summary. Do you wanna just run through these, Nathan? I've tried to divide up what the report is saying.

[00:10:32] Nathan Wrigley: Okay, just to read what's on the screen, really. Ethical and societal impacts, societal is hard to say, are: transparency and data usage; privacy, which is protecting personal data; security, impersonation concerns; sustainability, the environmental side of it; and balancing interests, which really refers to the fairness to the content creators and the consumers. Some of them have got little parentheses at the end.

And I'm sure that if you ponder each of those for just a moment, you can see a real world example of where AI might be able to abuse any of those individual bits. Obviously sustainability: if we all rely on AI for everything, we're gonna be burning through carbon at an incredibly quick rate.

Balancing interests: who owns what if the LLMs are allowed to suck up all of the data without giving credit back, and they appear to be the origin of that particular piece of content, whereas in fact it was written by somebody, I don't know, 10 years ago or what have you? Ponder each of them for a moment and you'll be able to figure out how it might be of concern.


[00:11:47] David Waumsley: Yeah. And then, because it's sectioned up, the next thing is really addressing those: the technical impacts and standardization proposals. Obviously the W3C, as a standards organization for the web, likes to ultimately come up with some standards which we all agree to.

So it's really addressing all of those with: a consent mechanism for content used in AI training; the development of personal data stores for data control; labeling of AI generated content to combat misinformation; exposing model-backed web IPAs while maintaining interoperability; and evaluating the environmental impact of AI systems on the web.

[00:12:34] Nathan Wrigley: Can I just say, I do like the idea of a web IPA, which is what you actually said. A web-based India pale ale, that would be really nice. I think you meant to say API, but I prefer what you said. Can I just say though, just getting back, before you put this document together?

Yeah. I didn't really know anything about this at all, and I honestly didn't think this was any concern of the W3C. I couldn't, in my head, have imagined why they'd wanna get involved with this. And then of course, when you put this together and I started thinking about it, really, somebody's got to be thinking about it.

And that's really where it comes down, because it has to be some sort of more global approach, an approach where big business doesn't get its tendrils in, where government doesn't necessarily get its tendrils in. And all of a sudden the penny dropped, and I thought, actually, yeah, this is brilliant. We do need people who are detached from it all.

They have no economic motive, no political motive, whatever it may be. It's brilliant that they're thinking about this, and they seem like fairly sensible default starting positions. Anyway, sorry, you carry on.

[00:13:50] David Waumsley: No, I had the same thought as well, trying to understand what the W3C has. And I think it does have a unique power, as we've talked about before with the working groups, particularly with CSS, how within those working groups it's bringing big tech to the table.

It's bringing Microsoft, Apple, Google, Adobe to agree what they think the web should look like, these standards. So it's obviously the best body, given that AI pretty much appears through the web. What other organization could get everybody around the table? And you can see, we were talking about this a little bit earlier, about

how there might be a benefit to, say, Google, because its main business is through its advertising, through its search. One of its biggest worries, and one of the things they're tackling here, is how do we mark up stuff on the web so people know it's AI generated?

And of course, Google want that more than anything for their own business, as well as being on the side of supplying AI. They also want to control that for that aspect of their business. So there's a good reason for everybody to try and agree some standards, I think. So it is expected of web developers, when we're putting sites together, that we declare our AI content, you know?

[00:15:11] Nathan Wrigley: Yeah. I think it's gonna be really difficult for the W3C to have teeth in this, in that it'll be easier for them to just create some recommendations, which we would hope people would follow.

But maybe this will be one of those situations. For example, at the moment it does appear that accessibility, nothing to do with AI for a moment, just accessibility, is gaining teeth, because politicians around the world are putting laws together which say, look, unless you do this, you are gonna feel the strong arm of the law

if your website is not accessible in these certain ways. And here's the guidance for how that should look, but now we're gonna give it teeth. That's coming in next year in the EU. Maybe this sort of stuff will need that approach over time as well, because you can imagine it'll be easy to just ignore any of those points and say, yeah, nice idea.

But my country, the jurisdiction that I live in, doesn't compel me to do it, so I'm not gonna do it. Yeah, I'm gonna steal that content and claim it as my own, who cares? So we'll see.

[00:16:18] David Waumsley: Yeah, the idea of this report is that it is the first stage, so right at the moment it's out there as a partly written document where people can add in their views and things.

There are a couple of contributions, and we'll come onto those in a minute. But then they'll move on to stakeholder interviews, which I guess will be talking to the likes of Google and Microsoft about what they're doing on that, and then it'll move into the W3C workshops, in the way that they might go about standardizing anything like CSS.

Yeah. But I think surely it should be welcomed, when you think that you might have Microsoft competing with Google. I'm sure everybody wants this to be a success in the end. So if everybody can agree some standards which they will work to on the web, then it's to the benefit of everyone.

So I think that's where they are unique. But yeah, it does seem a little odd to me, because most of our news about AI doesn't really involve the web. You think of it as a separate endeavor, and one which, potentially, and that's another argument, could remove the need for the web entirely.

[00:17:28] Nathan Wrigley: Yeah. You're imagining a scenario where you've just got this little pocket assistant, and a lot of the things that you're doing on the web you've now consigned to this AI, which is sitting in a device in your pocket. And you don't need to browse webpages, because you just ask it a question and it provides you with an answer. Yeah. It undermines what the internet has become.

This is honestly so interesting. With those ethical and societal impacts, I can't think of anything that they've missed, but I'm sure that as the days, weeks, and months go on, there'll be other things thrown in there. But at the moment we do seem to be in a cycle of hype, and I'm seeing it as a self-fulfilling prophecy, if you like, where one AI company promises a thing, and so the other AI companies have to over promise because of that promise, and then the original one has to over promise again. We do seem to be in an era like that at the moment, and you only have to watch the tech conferences, Google's I/O and things like that.

And they portray AI as this entirely benign, perfect force, where you show it a thing and it tells you what it is, and you ask it a question and it gives you the right answer. It does really seem like the hype cycle that we might have had for things like crypto or NFTs a little while ago. Maybe give it a year,

that cycle will end, people will realize, you know what, it can't do everything for me, but

[00:19:01] David Waumsley: yeah, I mean there’s certainly, that’s cynical. We’ve been watching some videos that have come out, by, business people who will just say this whole AI thing is exactly like that.

It's another thing. It was big data before that, it's been wearables, it's been Bitcoin, and this is just another fundraising Silicon Valley hype hoax, if you like. And certainly, I think people have been caught out. As we know, Google cheated and got found out for staging some of its demonstrations of what its AI can do, where it was supposed to be able to identify certain shapes and tell you exactly what they were, but it needed human intervention.

And we had a big thing with Amazon as well, where they claimed it was a success, but it needed all of these operatives in India. It was a checkout for their store where you didn't need to stop and pay at a till, just cameras, but real people were doing the work. And now they claim the AI was a success, but they still closed it down, and they still removed a lot of their documentation beforehand.

So I think there's a certain element where I'm sure none of these companies want to get into this kind of thing, but they almost have to get into lies, if you like, to compete. Somebody starts with an exaggeration, somebody else has to beat it because they're in competition, and they get stuck in it, really.

[00:20:22] Nathan Wrigley: I think one could gently call it overpromising,

[00:20:26] David Waumsley: overpromising is probably a better word. Yeah. But it's probably not where any of these companies want to be. It's just the way capitalism works, isn't it?

Everybody has to try and get the best competitive advantage. So I would imagine the W3C has that ability, as in the past, to get agreement, and I think we've seen that with the web, where everybody agrees: yes, we want a standard set of code, we all want to agree what that is, and we all want to make the web work on everything.

So I can see how it can be a good place for AI standards to be agreed. But there is an argument, sorry, I wanted to just say something on that, and I think it's alluded to in somebody's GitHub addition to this, that you could say AI is a threat to the web itself. I've seen other people talk about this. If you take, for example, PDFs: they're very difficult to make accessible, because they're not web documents, and you have to put all this embedded stuff in.

What they're saying these days is that large language models can read that stuff and work out what the content is, so that work might not need to be done, in a way. And we see that a little bit, I think, with a lot of SEOs saying about Google: when we were in the age of big data, marking everything up with schema.org, or rich snippets as they called them, was the way to go.

But a lot of SEOs are saying, and it's true of Google, they seem to be dropping support for a lot of these things. Partly because it might be over abused, but I think it's partly because they don't need it so much, because the machines can work out the content areas of sites based on, oh, I see, their knowledge. Okay. So, yeah.

Yeah, so you can get into that argument where you're saying: a lot of the fundamentals of the web which we agreed to, do we need them in the same way? Do we need alt tags any longer, if they're being read in a different way with AI? And

do we need ARIA labels and stuff, if screen readers can detect for themselves, with new technology, what a page's markup is? Yeah, it's quite interesting. There is a potential for some of the web to be undermined with that.

[00:22:45] Nathan Wrigley: I was gonna say something there, but I'll wait, because it is what we're gonna come onto. It was about the capacity of AI to actually get all of that stuff right, the DOM and what have you. So shall I just scroll up a little bit more and we'll go onto the next little bit. Okay, so we're onto Jakob Nielsen, did you wanna mention him?

[00:23:09] David Waumsley: Yeah, it was just that somebody else made the point that the report isn't talking about accessibility. And Jakob Nielsen, the king of usability as he's called, wrote an article, I've linked to it there, saying that accessibility has failed. People don't use their ARIA labels correctly, don't mark their HTML up in a way that screen readers can easily read, and generative AI could start to do that job, which is really the point I was making.

[00:23:36] Nathan Wrigley: Yeah. Yeah, that was the point I was gonna make as well. And, I think, that kind of makes sense. Wouldn’t this be an area where AI would be. Very good because it’s got a very defined outcome, Ha. Has it been marked up correctly for a screen reader, ignoring the fact that in the future screen readers might operate in a different way for the screen readers that we’ve got today?

Would AI be potentially quite good at this? Because it could see: this is a paragraph, this is a heading, that is a heading nested within another heading, let's give that an H2 and that an H3, and whatever else we might add to the HTML, the markup. It feels like that would be an area where you could just maybe click a button and it would do a better job than you not doing it at all, if
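The kind of "very defined outcome" Nathan is describing is mechanical enough that you don't even need AI to detect it. As a rough illustration only (a hypothetical checker, not anything from the W3C report or Nielsen's article), a few lines of Python with the standard library's `html.parser` can flag heading levels that jump, e.g. an h2 followed directly by an h4:

```python
from html.parser import HTMLParser

class HeadingChecker(HTMLParser):
    """Flags heading levels that jump by more than one (e.g. h2 -> h4)."""

    def __init__(self):
        super().__init__()
        self.last_level = 0   # level of the most recent heading seen
        self.problems = []    # (previous_level, offending_level) pairs

    def handle_starttag(self, tag, attrs):
        # Heading tags are h1..h6: "h" followed by a single digit.
        if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            level = int(tag[1])
            if self.last_level and level > self.last_level + 1:
                self.problems.append((self.last_level, level))
            self.last_level = level

checker = HeadingChecker()
checker.feed("<h1>Title</h1><h2>Intro</h2><h4>Detail</h4>")
print(checker.problems)  # [(2, 4)] – the h2 -> h4 jump
```

The interesting part, as the conversation goes on to say, is that the check is easy but the fix is editorial: deciding which level a heading *should* be requires understanding the content.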

[00:24:22] David Waumsley: Yeah, and I see Jakob Nielsen's point, and I see he's had some pushback from a lot of people, disabled people actually. I understand that pushback, and I don't think his is the right thinking, because I think you do need people to mark up, because you need another human marking this up to make sense to a human. Admittedly, it's better than nothing, but I don't think it gets rid of the idea of marking up documents with HTML properly, with a concentration on accessibility. And I think in some ways it's skipping the point, as you raised, that next year we've got a European law which is requiring many of us to make our websites more accessible.

So yes, he's right in the fact that it's failed, but then there's been nobody policing this, and I think we're seeing more policing of it. And I think human beings marking up documents, rather than letting AI do it, is better, because they will mark it up with the context of how it's expected to be explained, in the same way that I said about alt text as well.

There's no point in describing the whole flipping thing. It's really how that image relates. Does it relate at all? If not, it needs blank alt text, so it isn't read out to people who don't need to know it. It's just decorative, and I don't think AI can distinguish that kind of stuff in situ.
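David's point about decorative images maps onto a real distinction in HTML: an image with *no* alt attribute is an accessibility failure (screen readers may fall back to announcing the filename), while an explicitly *empty* `alt=""` tells screen readers to skip the image entirely. A small illustrative sketch of that three-way split, using only Python's standard library (the class and sample markup are hypothetical, just for demonstration):

```python
from html.parser import HTMLParser

class AltAuditor(HTMLParser):
    """Sorts <img> tags into described, decorative (alt=""), and missing alt."""

    def __init__(self):
        super().__init__()
        self.described, self.decorative, self.missing = [], [], []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        src = attrs.get("src", "?")
        if "alt" not in attrs:
            self.missing.append(src)      # failure: nothing for the screen reader
        elif attrs["alt"] == "":
            self.decorative.append(src)   # fine: screen reader skips it
        else:
            self.described.append(src)    # described: alt text will be read out

auditor = AltAuditor()
auditor.feed('<img src="chart.png" alt="Sales doubled in Q2">'
             '<img src="divider.png" alt="">'
             '<img src="hero.jpg">')
print(auditor.missing)  # ['hero.jpg']
```

A tool can find the missing attribute, but only a human (or a very context-aware model) can decide whether a given image belongs in the decorative bucket or needs a description, which is exactly the gap being discussed.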

[00:25:43] Nathan Wrigley: So the AI might be better than doing nothing at all, but it's probably not gonna be as good as a human doing something. I think that's a sensible default position. If your organization is gonna do something, it would appear that just clicking a button and praying that the AI didn't hallucinate its way into nonsense is probably not gonna get you out of legal woes in the near future.

Yeah. Okay. And then we've got Tim Berners-Lee.

[00:26:15] David Waumsley: Yeah. It's been the 35th anniversary of the web. Yeah, gosh, I know, it's getting old. So there's a couple of things, and I've linked to them at the bottom of this, and maybe I'll add some more links in, because we referenced some things as well.

He's done an open letter about the web, which talks about that. But there's also another article, by CNBC, where they go into some of his predictions, and included in that is AI, or as I'm seeing on my notes, it says A1. Yeah, a road in the UK.

Yeah. So the big takeaway I've added there was the fact that he thinks it's gonna transform the way we interact with the web. And Google, interestingly, have been like that as well recently, saying, let Google do the Googling for you. This is a new take on what we expect the deal with Google to be.

But yeah, he's saying it'll transform how we interact, and he sees, I'm not so optimistic as him, that we'll all have AI assistants doing work for us, similar to how our doctors, lawyers, or bankers do. So he's very optimistic, isn't he, about AI? More so than me and you.

[00:27:38] Nathan Wrigley: What, do, you know what, that almost to me sounds like a bit of a dystopia.

There's a film that I watched recently, and it's got nothing to do with this, but it was a dystopian future film, and there was a bit where the protagonist, the character in the film, went up and talked to a robot to get his prescription. It was an actual robot, which had flat lights flashing for a mouth, but no other moving parts.

And it was just this horrific conversation: I'm owed this thing, can I have it? We don't seem to have this for you, in a really robotic voice. Back and forth, and just nothing but frustration. And that piece concerns me, because it seems to undermine the humanity of a lot of things.

Maybe that'll be okay, maybe that's where we'll go, but I fear it. That doesn't seem like a good incentive for humanity particularly, but

[00:28:39] David Waumsley: No, a lot of, services we have is because of that personal connection with people, it. Almost acting like counseling as well.

And a lot of people grieve over the loss of that human connection in their transactions. So yeah, I think he's a little bit more optimistic than I would be. He is also, and it's related, though it's not under his AI predictions, predicting, as he wrote in his open letter, the breakup of big tech.

He sees that happening, and this is the problem, isn't it? We are in an advanced capitalist world, and we are seeing that money is concentrated in certain areas. We have big tech, they have an awful lot of power, and we are getting to the point where the US government is taking a lawsuit out against Apple to make sure there's competition.

We've got something new that's just passed for the EU as well, which gives more control over big tech so it doesn't abuse its position. So I do think there may be a need for that. Yeah.

[00:29:44] Nathan Wrigley: And also, interestingly, it is largely big tech which is pushing this agenda of what AI can do. There's no tin pot, little cottage industry, mom and pop AI out there which is a good rival to something like OpenAI or Gemini or whatever it is.

It's the big tech who are, and we mentioned it a moment ago, in this self-fulfilling prophecy, where they've got to overpromise because the other one's overpromising, and it's the big tech doing it, and they've got a really big stage. And honestly, they could fill those presentations with utter lies, and I think at the moment we just lap it all up, because, oh look, it can do that, that's exciting. So yeah, I think you're right. It does appear the legislators around the world have got the sword of Damocles above the heads of these big tech companies.

So we’ll see how that plays out.

[00:30:36] David Waumsley: Yes. Governments having to take a part in this just to see that. Yeah. They don’t have too much power. And, we were talking about, how interesting it is with. Gen Z seeing things very different. Being a kind of first generation that doesn’t have job security and housing insecurity, that they are slightly cynical where, big companies where money’s going on that.

So they change their behavior with big tech. They see, in a way, what's going on, who has all the power. So I think that's changing things on the ground. I do think, with Tim Berners-Lee, he always sticks to this, partly, I think, because of a business he is running, that all of this has to be set in the context that his ideal web is one where the users own their own data.

[00:31:22] Nathan Wrigley: Yeah. Yeah. I’m a. A big proponent of that. But it’d be interesting because it would appear that some of the big tech companies, particularly Apple, it would appear that they’re moving in that direction. In that they’re it feels that although they have just. Been in chat with open AI apparently, so we’ll see if this breaks out.

The idea is that AI will be done on your device; these M4 chips that are coming down the pipe will be really good at doing everything on your own device. Whereas at the moment you’re used to going to OpenAI or Gemini, which is an online portal: you type your things in and, whether you’re using the API or the web interface, you get a result over the internet.

It feels like the future is going to be doing that on your own device, so you own it and it never goes anywhere else. So maybe Apple will justify not being broken up on that ground. I don’t know.

[00:32:17] David Waumsley: Yeah, you mentioned that there’s no mom-and-pop AI business out there. We both watched a video on YouTube, which we liked in a way, by a guy called Mike Pound on the Computerphile channel.

I’ll put the link in because I haven’t got it to hand. This was the one, if you remember, Nathan, where he was looking at some of the scientific reports. We’re always talking about the massive advances in AI and what it can do within a whole year, but from the reports he was looking at, and we have to trust him that those reports are there, he seemed to be a sound man, saying that progress will slow down because of the cost of getting quality data. It’s so easy to gather a load of data that isn’t that great, which means everything’s always looking back. And like I say, it’s rubbish with CSS, because there’s so little data about modern CSS and everything’s old.

So to correct it you need to keep throwing so much more data at it that the costs become too much for the businesses to advance; it becomes too much to invest to be able to get the improvements back. And then there’s the other debate, which we’ve heard a lot about.

Yeah, and you mentioned it on your new show: the environmental cost of AI, with all of that.

[00:33:36] Nathan Wrigley: Oh boy, that’s just terrific. If you think googling things is bad, when you request an answer from an AI you really are burning through carbon much more quickly than just going onto a variety of websites and finding it out for yourself.

Obviously it’s a lot quicker, but every time you create an AI image or an AI song, or do an AI search, my understanding, from journalists that I think have credibility, is that it’s burning through CO2 at a really alarming rate, and that really shouldn’t be sustainable.

Yeah. That’s interesting.

[00:34:18] David Waumsley: Yeah, it does seem like there’s a little bit of wariness now about whether this will actually pan out, like all the things we mentioned before, the kind of Silicon Valley idea that this is the big thing that’s going to make money. There are a lot of other people as well saying this could potentially be another dot-com bubble.

[00:34:34] Nathan Wrigley: Yeah. If it doesn’t continue to get better at an exponential rate, we’ll quickly grow tired of it, because that’s the human condition, isn’t it? And the YouTube video, which you are going to link to but isn’t on the screen at the moment — I think you did a really credible job of explaining that.

And rather than the graph just going upwards, it went up for a long time because LLMs suddenly enabled things which were not possible before. And so it looked like it was being creative, it looked almost human, it looked like it was passing the Turing test. But his argument was that you had to consume all that data to get to that point, and to make it twice as good as it is now is not just about putting double the amount of data in; you need something like a thousand times the amount of data.

To get a teeny-tiny improvement, you put another billion data points in and you get 0.1 of a percent improvement, and then you’ve got to add another 10 billion. And where does it all come from? And if we end up filling it in with AI-generated data, then we’re in trouble.

But it made me more sanguine about it: it’s leveling off, and potentially it won’t get any better. Obviously, we’re recording this in June 2024; when the robots have taken over in August 2025, we’ll have to revise this podcast.

[00:35:54] David Waumsley: I do think it takes us full circle to all the things we’ve been talking about. When we look at the ethical and societal impacts, and the things they’re trying to standardize, it touches on all of these things: the sort of data usage, what data you are using for this. If everybody’s agreed to certain things, we’ll have proper competition, do you know what I mean? There’ll be more honesty with it.

And I think also, something that’s not been resolved yet is content creators: it’s stealing all of their work. Some people have done experiments where, because it’s such a small data set, if you get into something very niche and you ask AI, it will almost duplicate the source.

[00:36:37] Nathan Wrigley: More or less you are quite literally robbing the exact source. Yeah.

[00:36:41] David Waumsley: Yeah, and some people have done experiments where they’ve done it on something so niche, or even tested it by putting up wrong information on the web, so the AI comes back with the wrong information that is on the web because it’s the only source. It almost entirely rips off the whole article that it serves back. So when it comes to standardizing some of this stuff, when it starts to look into fairness to content creators in some standards, and it looks at the kind of data usage, and at what’s sustainable as well, I really hope the AI companies do get round the table on these kinds of things to create standards, because I don’t think it’s against their interest.

I think AI’s promising. I think it can help us do a lot of jobs, and we both use it and love some of the things it can do for us.

[00:37:32] Nathan Wrigley: But it’s interesting, though. All of these organizations are made up of humans, and it is in their interest not to do this in the short term, because there are going to be billionaires made next year, and then they can just swan off and do whatever else they want.

But for the long term, if OpenAI and Google want to have an AI future that spans decades, then there is an interest. I think one of the things that concerns me at the moment is the self-interest of a few people at the top of these companies, who are making money like they’re printing it, and it is in their interest to do the nefarious stuff, to overpromise, all of that.

And we don’t seem to have a way of reining those people in at the moment. That could be a very naive position; maybe they’re full of good intentions and I’m labeling them incorrectly, but it feels a bit like that sometimes.

[00:38:36] David Waumsley: No, of course, and it’s going to happen. For me, that’s why I think the W three C is a good organization, and I hope it does succeed with this.

Because we talked about another success before: the web could easily have been lost to Flash, a proprietary technology, and a lot of people believed in it. It would have been in their self-interest, but somehow the open web managed to survive. And I think there’s never been a time when the biggest giants in tech have got behind something like the W three C. So we see the security of this web project. I was asking before, could AI replace the web? I don’t think it will, because it takes so long for people to adopt new technology anyway, and the one we’ve got is more effective and everybody is behind it.

So I think setting these standards up would be a good thing, even if there is a lone wolf out there who perhaps wants to exaggerate for quick personal gain or something. Because that’s the danger, isn’t it, with everything: somebody tells a lie to exaggerate something, and they do really well out of that lie because people believe it and they get lots of money.

So the only way you can compete with it is with another lie, and on it goes, and everybody goes, please, I wish we’d stop with this lying. And I think that’s right. Yeah.

[00:39:46] Nathan Wrigley: Oh, dear. I mean, we’ve definitely solved that problem for everybody, haven’t we? Yeah, they’ve solved it.

[00:39:55] David Waumsley: I think it’s interesting anyway, even though it’s hardly going to get any attention, the fact that there is this document out there. It’ll be interesting to see how it develops and whether we get AI standards coming from the W three C. I think that would be a fabulous thing if it could be done.

[00:40:13] Nathan Wrigley: Perfect. I’ve taken the show notes away, so that was episode 13: our thoughts on AI and the W three C’s approach to it. The show notes, as always, are at no script show slash the episode number, in this case 13. Go and check them out there, and you too will be able to see Tim Berners-Lee’s AI predictions.

I doubt it. I’m sure David will go and change that in a hurry, but that was lovely. Thank you, David. I’ll speak to you soon.

[00:40:44] David Waumsley: Yeah, cheers. Bye.

Your Hosts

Nathan Wrigley

Nathan hosts WPBuilds and the WP Tavern podcasts. He lives in the UK.

David Waumsley

David started building websites in 2005. He's from the UK, but now lives in Asia.