Yoast SEO News – October 2023 Edition
Webinar transcript
The first item we have just came from Mark Gurman at Bloomberg. Apple is apparently considering building its own search engine. I have often wondered why they haven’t done this yet, but I understand there are reasons.
Have you heard anything about this? Not too much, actually. I’m a Google and Android user, so even when it comes to tech I tend to read things that don’t have much to do with Apple. So when we were bantering it around the office, this was the first I’d heard about it.
I was also surprised that they hadn’t created a search engine already. Is it because Google’s monopoly is too big, or is now the right time to start pouncing on it, especially with AI coming in? Well, from what I understand, there are a couple of reasons why Apple hasn’t done this yet.
Number one, Google is doing excellent work, and there’s no reason to reinvent the wheel. The other reason is financial. Google is paying Apple a lot of money to keep Google accessible and set as the default on Apple devices. Eighteen billion dollars? I think the next slide shows how much: somewhere between 18 and 20 billion dollars a year just to remain the default on iPhones. That’s not small change; that’s some serious money. And I would imagine it’s a big undertaking.
It’s a big investment to run your own search engine. So if you’re already making 20 billion dollars a year simply by including Google, why not continue to include Google? I think if it reached a point where there was a technological reason, or maybe a privacy reason, that would probably be the thing that changes it.
If Google stopped sharing some of the data with Apple, or Apple needed or wanted to use data that it collects within its own ecosystem, that might change the math a little bit. But for right now, I’m not Apple, obviously, but I think I would have a hard time walking away from any number that ends in billion.
Maybe they’re asking: what would happen if Google didn’t pay us? What could we earn instead, and how could we earn it? Could we do it by searching across everything that works with Apple? On top of that, there are all the iPhones and all the other devices. Even though I’m Google by phone, I’m on a Mac right now.
So, what would happen to my search behavior? I know Bing tried something like that at certain points and it didn’t work, but it might be different with Apple, because I don’t think they would create a search engine or a search experience in the same way.
Maybe what they’re thinking about is what this next chapter of search looks like; we’re all wondering what it will be, and they’re one of the innovators. It sounded like there was something in the articles that mentioned the Search Generative Experience.
So maybe now, with AI, Apple can take more of a “you don’t even have to go to the search engine” approach: you can just stay right here on your phone and keep all your data, your information and, more importantly, your searches within Apple’s clutches, so they don’t have to be shared with Google. That would accomplish a couple of things. It would keep the data and the user information within Apple’s closed environment, and it would also deprive Google of revenue.
So this feels like a very tenuous frenemy kind of situation between Google and Apple. And if it were to split, what would that do to Google’s market share? What would that do to Apple’s market share? We’d probably have a second search engine that we would have to optimize for.
I mean, that would be a pretty seismic shift in the landscape. It’s interesting that they may be thinking about it more seriously now that they’ve made the VR products as well, which lean more toward a discovery experience in general.
So that’s going to be interesting. Do we as SEOs then have to think about that differently for a different kind of search experience? Then again, Google is going that way anyway, so we may have to adapt our thinking to the technology surrounding us either way.
It would be another thing to think about, like a new social platform that you have to get used to even though you don’t want to.
Well, just one last thing before we move on to the next topic. I think what’s nice, if you can call it nice, about Google having such a monopoly on the search market right now is that we really only have to optimize for Google.
We don’t have to worry about the other search engines, because even if we dominate the results in the other search engines, they’re not going to drive the same level of traffic that Google is driving.
So it’s a single thing to focus on. And if we suddenly ended up with two almost equal traffic drivers, you’d absolutely have to think about whether you’re optimizing for Google or optimizing for Apple. And if you optimize for one, are you damaging the other, and how is that going to affect your traffic?
I hope they figure it out. While you were talking, I was just thinking: what would they do with all of this? What would they do with all of that search experience? Because they could put it in all of their products.
Then you think of shopping feeds, and then of Amazon and how its relationship with Apple might change, because Amazon could provide marketplace ads or placements or another form of data feed to power whatever that search engine is, which means Apple may not require Google at that point. And the more people use their mobile devices, the more you could essentially call Google a middleman.
They’re just sitting there between what you’re thinking and what you want at the end. Obviously they won’t want to stop doing that, and Apple could facilitate it themselves, which is really interesting. I think Apple users have a different, more intense relationship with their devices than Android users do.
We should probably move on. Let’s talk about fluctuations in Google Discover traffic. Barry Schwartz had an interesting article at Search Engine Roundtable.
Do you try to get a lot of Google Discover traffic? I’ve had some really good luck with it. Yeah, but it fluctuates, right? It’s always hard to control, because it’s not scientific in that way; you can’t predict a pattern.
It has to be driven by something that’s newsworthy, which you obviously can’t predict unless you’re a news publication working in that particular format, which you have more experience with.
But it’s interesting to see that fluctuation over time. Sites that got hit by the HCU may also see related, if not directly connected, hits here, where Discover traffic just isn’t there for the site anymore. So it’s not just about what you’re writing, but how you’re writing it, to let a potential audience discover you.
But the fact that Google has been making this more specific means that it will have been abused in the past, right?
Yeah, Discover traffic is one of those things. Remember when Digg was big? People didn’t know they wanted that firehose of traffic that came from being picked up or popping on Digg until they got it, and then it was addictive. So I know that people who have had a taste of that firehose really want to keep going back to it, and anything that makes it less predictable, or less something you can optimize for, is going to be frustrating.
At least for the people who have been actively trying to game and work Discover traffic. It is interesting. I think that as the algorithm gets more sophisticated, it will learn the kind of content, the context, and the ways in which things should be discovered.
So even if what you’re publishing is news, the context, even your personal context for why you might search for something, may give you different results. That could shape discovery around your existing thoughts and opinions in a bad way: information bias, pushing you deeper into an echo chamber, which is a problem with social media at the moment and the way their algorithms work.
That was always one of the things I argued with news publishers about. They wanted to customize the news they showed to the people landing on their homepage according to their likes and interests. And part of reading the news, part of being open to new things, is being exposed to viewpoints and information that you might not otherwise be inclined to look at.
I can definitely see the potential for it to fall into just a giant confirmation-bias loop. I wonder whether that means some people will prefer being in an echo chamber and others won’t, and whether that will change the way you discover new content.
People like to be comfortable, and they’re comfortable with being reaffirmed and reassured that they’re right. So I think most people would fall into the “I would rather be comforted than challenged” camp. But yeah, it’s a big change, and I think there’ll be another change to Discover soon that refines whatever they’ve learned from the experimentation data from these updates as well.
We’ll have to keep an eye on that. I’m sure Barry will keep an eye on it. He’s very good about reporting on those things. Barry also reported in Search Engine Land about the impact of Google’s September 2023 Helpful Content Update.
I know a lot of people have been hit, and it’s the subject of a lot of conversations. Not many people have come out and said the HCU just happened and they couldn’t be happier; you don’t hear a lot of that. It’s not an update where there have been loads of winning domains; the only way to “win” is that someone else got penalized and you now take their place.
That’s the only way someone has won, rather than their content actually being rewarded, which might be confusing for those who initially see an uptick. It might just mean that you haven’t done anything bad to your site, but it also raises the question of what happened to the competitor.
And maybe that competitor is going to wake up now, because they’ve just noticed a big drop in traffic. So you’ve still got to keep your wits about you for the next update, which will happen, I reckon, in the next six months.
But it seems to be one of the widest updates, affecting the widest range of sites, in some time. I do wonder how they’re fine-tuning the algorithm that’s driving it, and the learning model the machines are using to make those determinations about prioritizing human-centric content over search-engine-optimized content, or content written exclusively for the bots.
Ann Smarty posted something on X asking the Google people: why is this particular piece of content not ranking when it’s written 100 percent for users? It’s one person’s opinions and views, a very well-thought-out, in-depth article about this particular product, I think it was a gas grill, I don’t remember exactly, while this other article over here, which outranks it, is on a gas grill site.
That one is clearly done just for ranking, and there’s no useful content there; it’s not good. And the Googler came back and said: this article that you wrote, while it’s very good, there are no pages on your site at all that are optimized for gas grills. But you just said that you want us to write for people, and that you want to prioritize content that’s helpful and useful.
And now you’re telling me that because it’s not also optimized for the search engines, and because I didn’t set up a page on my site just for gas grills, I’m not going to outrank this page that was clearly built just for the rankings. I just kind of scratched my head at that one.
Like, admit to me that you don’t know what you’re doing without telling me that you don’t know what you’re doing. That’s kind of how it came across. And where is that middle ground? No one can decide that either.
And it’s also interesting how people have reacted since, because there’s been lots of content pruning, whether that’s 301-ing a page somewhere, 410-ing it, or just letting it 404. That must be happening in a big way over the last four weeks.
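For anyone doing that kind of pruning on an Apache server, here is a minimal, hypothetical .htaccess sketch (the paths are made up) showing the difference between redirecting a page and deliberately marking it as gone:

# Permanently redirect a thin page to a stronger, related one (301)
Redirect 301 /old-thin-article/ /updated-guide/

# Tell crawlers the page was removed on purpose (410) rather than just missing (404)
Redirect gone /retired-article/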
Even recently I saw Search Liaison mention on X that old content isn’t necessarily unhelpful content, which is true. But now I’m thinking: are they saying that because they’re seeing a hell of a lot of old content that’s just getting noindexed for whatever reason, or redirected elsewhere? And that isn’t necessarily what people should be doing.
But when you’ve got sites with tens of thousands of pages, how are you going to sift through that manually? It’s going to be nearly impossible to do that. Well, not impossible, but it’s just going to take too long, isn’t it?
I mean, I would argue that if you had been writing good content all along, your content would not be unhelpful to start with. But there are so many websites and so many business models that are based around “I’m just going to spin up a whole bunch of content intended to rank and then sell my stuff or get those eyeballs.”
Yeah, there’s a lot of garbage out there that realistically should never have existed in the first place. As I said last month, this reminds me of the original Penguin and Panda updates. In the same way, people kind of knew before the algorithm update happened; they knew if they were doing really bad black-hat link building, and maybe afterward they quietly admitted it.
“So maybe I did go against the ethics of the algorithm, and I deserve this karma, right?” So maybe this is today’s version of black-hat content-production karma for some site producers who always thought their sites ranked well. But that’s the question: are these ranking well? Yeah, but is it helpful to the user? Is it useful?
There’s one more X post that I wanted to mention; it was an analysis by a Googler of a recipe page. They said the recipe page wasn’t helpful because, if you read it, there are a lot of paragraphs that repeat, a lot of information that repeats, and this tome of backstory before you get to the actual recipe, and that isn’t really helpful to the user. To which I’m thinking: this is literally what your algorithm, your algorithm, Google, I’m speaking to you, has been training the recipe makers to do for years.
And no one has liked it. The recipe makers don’t like it, and the people using the recipes don’t like it. We don’t like having to read an 8,000-word backstory about where you met your mother and how that’s how you got the recipe for her chocolate chip cookies.
Along with helpful content, Google also released a spam update. So in addition to trying to weed out the unhelpful content, which we’re not 100% sure they can differentiate, they’re also looking to improve their spam detection and removal, hopefully across multiple languages, though we know that’s not always balanced.
It’s targeting cloaking and hacked sites. Okay, targeting hacked sites is interesting to me, because I see a lot of hacked sites, especially hacked WordPress sites. People don’t even realize they’ve been hacked, because some of these exploits really do hide, and unless you know what you’re looking for, you won’t discover them. So suddenly finding that you’ve been completely removed from the index would be a jarring wake-up call. Have you heard anything else about this?
Not specifically with hacked sites, but it’s a problem, because Google will obviously learn over time. The way those hacked sites behave will become a pattern, which you would hope lets them say, well, maybe this is a hacked site. But they’ll still have to check those sites, and if the legitimate content doesn’t get through, the owners are still being penalized for being the victim of it.
And it’s hard, because these usually come in through third-party plugins, bad copies of themes, or some other exploit, like an old version of WordPress that’s now vulnerable. So much of the onus is on the site owner now, and how are they going to know the ins and outs of the code of every third-party plugin? No site owner does, unless they’ve got a really experienced developer who knows their way around that specific CMS. But from a search engine’s point of view, they know this will always be there.
It’s like web crime, in a way, pollution that they have to get rid of. Well, the moral of the story is: don’t go installing every plugin you see just because it shines brightly and has flashy lights, because you don’t know where it came from.
You have to be judicious about the plugins you use, you have to keep your WordPress instance up to date, you have to keep your plugins up to date, and for plugins that have been abandoned or not updated for ages, you might want to consider taking those off, because no one’s maintaining them anymore. I mean, there is a lot to be said for keeping your website up to date with plugins and security patches.
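If you manage your own install and have WP-CLI available on the server, a quick, hedged sketch of acting on that advice from the command line could look like this (check what each command will touch before running it on production):

# Update WordPress core, then all plugins and themes
wp core update
wp plugin update --all
wp theme update --all

# List inactive plugins so you can decide which abandoned ones to remove
wp plugin list --status=inactive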
I also wonder, though, about scraped content and auto-generated content. I still don’t think Google is great at detecting when something has been auto-generated, and with all of the AI plugins spinning existing content that was stolen from someone else, scraping is definitely going to get so much easier, and I don’t think they can tell the difference. I know they say they can, but I don’t believe it. I think their false positive rate and false negative rate are still extremely high; too high to be useful in production.
So they have a long road ahead when it comes to getting spam under control. It’s probably a losing battle, but I know it’s a battle they have to fight. Well, if it’s a losing battle and they do in fact lose it, that’s going to affect normal organic rankings, or whatever they’ll be called in 10 or 20 years, if you want to think that far ahead.
As technology gets better, content is going to be harder to identify as written by a robot or a human. And then do you have to disregard the approach they’ve taken recently of just judging what’s good and what’s helpful? The ways people will be able to spin that content automatically will make it harder to identify what’s good, what’s helpful, and what’s not. How do you rank that? At some point you almost have to have verification to publish.
You almost have to have some whitelist of trusted, verified sources that Google would be getting information from, to weed out the websites that aren’t writing their own content and aren’t doing their own research. It’s complicated; that could be a webinar in itself.
Yeah, X is an example of one of the main platforms using verification, so that fake information, disinformation, and bots are harder to push, making abuse harder in general. Which is interesting, because even though people moaned and groaned about it when it first happened, it does seem to actually be a viable business model. It also does seem to be a way of verifying, even though there are ways around that as well.
Do you have to be a verified person somehow? Well, actually, this site is connected to this credit card, and this credit card connects to this person. Does that make schema all the more important in the future?
I imagine schema plus some kind of verification is where this is going, I think. It’s interesting, because then people will have to build business models around verifying people, which is going to make policing easier for the search engines, but it’s notable that we’ll have to do that at all.
You can’t hide anymore. Do you remember that old meme, that nobody on the internet knows you’re a goldfish? It’s been around for ages. Well, now you’re not going to be able to be a goldfish. Everyone’s going to know who you are and be able to find out what you do as a person, because otherwise it’s just too easy to lie.
What other updates were there? Well, there was a core update. It started on the fifth of October and ended on the 19th. But there have been a lot of updates: an August core update, the HCU in September, and the spam update we’ve just talked about, which started only 24 to 48 hours earlier.
That was a busy month, and when you get multiple algorithm updates, it makes it even harder to decipher which update may have caused a problem, because there was a little bit of overlap, especially between the HCU and some of the core update. I know with the HCU, some sites got hit at the beginning and some got hit at the end, just as another update was happening.
It’s very interesting to see how some sites reacted, and again, it’s actually harder to research them. I know Sistrix does an excellent job of publishing winners and losers from these updates now and again. It’s kind of like an earthquake with aftershocks: you don’t know which one will make the house fall.
Did you see much from this specific core update? I haven’t, not on the sites that I run, so I was far enough away from the epicenter that I didn’t feel it. But I think we’re still in it; I’m still researching the original one. If there was additional damage done by any of the secondary shocks, it wasn’t significant enough to make it onto my radar. Well, at least it’s done now, so whatever may have happened to anyone’s sites has happened.
We think it’s done, but they’ve rolled things out after Black Friday before. It’s been years since they did that, but Google doesn’t always respect the sanctity of the Christmas stretch between Thanksgiving and New Year’s.
I think people just prepare for it. Isn’t it usually around the 19th of December? Just as everyone’s starting to wind down, especially in some agencies where you start to think, right, we’re going to get a bit quieter in the second half of December unless we’ve got retail clients. And then something happens, and it’s unfair.
Would you rather have it on the second of January? What would you say if they were listening to us right now? Something like the 16th of January, maybe, once everyone gets back? Yeah, let everyone come back, all of your holiday parties are over. The only holidays you’ve really got to sweat at that point are Groundhog Day and Valentine’s Day.
So let’s do it then; then it’s not a problem. During the holidays, though, my mom is not going to understand if I have to leave Christmas dinner early to go deal with a core update that just destroyed everything. No, they’ll never care. They’ll never care.
That was all the main core updates, but they’ve also done some AI updates. We’ve got this new robots directive that Google rolled out, again reported by Barry Schwartz, who is an invaluable resource to the community.
The interesting thing about the Google-Extended directive is that it doesn’t really block as much as the ChatGPT block or the Bing AI blocker does. It feels almost performative, I guess is what I’m saying; like they had to do something, “we did it because everyone else is doing it,” but it doesn’t really matter.
Everyone else is doing it, but it doesn’t do anything. It’s like the button at the crosswalk: you can push it. Oh, do you think it’s a placebo? I think it’s the opposite. There’s one near my office that you press, and there’s no way you’re waiting less than 45 seconds, which, in waiting-to-cross-the-road time, is like five hours, isn’t it?
At least where I’m from, I’ve never seen a crosswalk button that did anything. But with these robots directives, there were disclaimers and language in the article that, to me, if I’m going to dissect it as though it were being told to me by Bill Clinton, would make me question the efficacy of the block.
You could do this, but what’s it really going to do? It is very annoying. And if they change something later, it doesn’t matter, because whatever they’ve collected in the meantime is probably worth their while.
And it’s going to be a bit annoying. You’ve also got things like ChatGPT moving toward real-time information now, instead of being 18 months out of date. You’re going to have to be even faster to keep something out of the index if you don’t want it indexed. It’s a consideration before you hit publish on anything.
That is, if you’re at the point of running a site where things are selectively allowed or not. But it’s interesting that at the moment it’s just in robots.txt. What it’s saying is that it’s not blocking anything from crawling the site and getting that information; they say they will take your direction and not use it in certain products.
They still have the information, and we’re just trusting them not to use it in a way we don’t want them to. We don’t know how they want to use it, and they don’t know how they want to use it either. There’s a lot of blind trust being asked for.
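As a concrete sketch, assuming you’ve decided to opt out, the relevant robots.txt entries look roughly like this. Note the difference discussed above: GPTBot is an actual crawler you can block, while Google-Extended doesn’t stop Googlebot from crawling; it only asks Google not to use the content for Bard and its generative AI models.

# Ask Google not to use this site's content for Bard / generative AI training
User-agent: Google-Extended
Disallow: /

# Block OpenAI's crawler outright
User-agent: GPTBot
Disallow: /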
It is weird, and it seems like it’s an opt-out world. And even then, what are you opting out of if you want to be seen, in terms of visibility? You don’t want it to get to a point where you need AI to consider these things or you’re not showing up at all.
Take the Search Generative Experience. You don’t want to be excluded from a result produced by Bard or by the SGE because you said, hey, you can’t use my data for that, even though Google knows your site and knows that it has the right information.
Your information isn’t being factored into that answer because you’ve said, no, I’d rather you didn’t. Well, then it’s not organic at that point, if you go by the actual definition of the word organic rather than the way we use it every day; that would be unnatural. It will be interesting to see how they deal with that.
It’s all very complicated. I think this might be a data experiment for them. Let’s see how many people add this directive. See what happens, see how it affects the way in which it works on their own products, and then fine-tune it later. But at least there are options here for everyone.
Yeah, let’s keep going so we don’t run out of time. The new recipe search interface. My understanding is that this is going to be an image-first kind of search, which is what this looks like.
Really, it just means you have to be much more careful about which image you’re putting forward as the first image in your recipe, which makes sense because you want people to see the finished product. I haven’t seen this particular interface yet, but I do see a lot of images and I do tend to click on the image in the carousel.
And images are really important when it comes to food, because they can decide whether you end up reading the whole thing. But it’s also a sign of how heavily this uses all of the schema, which makes it even more important with recipe schema to make sure all of those attributes are there, especially images.
But yeah, I’m interested to see how they use it, because obviously people are searching in that way. And how does that affect what you said before about how recipe producers have been told to write?
I feel like the recipe schema mitigates the damage. The eight-thousand-word essay about how you met your boyfriend, sitting above your recipe, would otherwise push the recipe data so far down that it’s difficult for the engines to grab, repurpose, and repackage it, which is essentially what they’re doing here.
I think that recipe schema is very valuable in taking the details and the important parts of the recipe, shoving all of that up into the head of the document, and making sure that the browser or the crawlers can get it instantly and don’t have to put on their hip waders and slog through your essay to get the recipe data they want.
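To illustrate, here is a trimmed-down Recipe schema sketch (all values hypothetical) of the kind of JSON-LD that hands crawlers the image and the key details directly, no matter how long the essay above the recipe is:

{
  "@context": "https://schema.org",
  "@type": "Recipe",
  "name": "Chocolate chip cookies",
  "image": ["https://www.example.com/images/cookies-hero.jpg"],
  "author": { "@type": "Person", "name": "Example Author" },
  "prepTime": "PT20M",
  "cookTime": "PT12M",
  "recipeYield": "24 cookies",
  "recipeIngredient": ["225 g butter", "200 g chocolate chips"],
  "recipeInstructions": [
    { "@type": "HowToStep", "text": "Cream the butter and sugar." },
    { "@type": "HowToStep", "text": "Fold in the chocolate chips and bake." }
  ]
}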
I still question how they’re going to justify, in their heads, changing all of the guidance they’ve been giving to the recipe people about those stupid essays. I wish the essays would die. I don’t like them. I don’t read them; I skim past them anyway because I just want the recipe.
So it would be great if we could get rid of that. I don’t know if these changes are going to necessarily cause the demise of the eight-thousand-word essay. Yeah, it’s interesting. I see someone mention that they look at the ratings regarding recipes, which is interesting, right?
Because, thinking of myself as a user, I don’t know who left those reviews, and something like that is very subjective. But I should say, I know my result won’t look like the photo anyway, so it doesn’t matter, which I guess is also a really good point.
If you have a sexy enough photo, and the one thing that can always be made sexy in imagery is food, that’s not to say it’ll look like that at the end. We’ve all been fooled by McDonald’s advertising. There are things you can do to dress up some of your dishes, but we’ll save that for the Q&A, because I’m getting the panicked look from the wings.
The SGE can now generate images and write drafts, which means we will all be replaced by robots tomorrow. I haven’t tried it myself and I can see how Gophers making a barbecue in the woods will be a great evolution.
Well, how useful is this to the future of how we do things? Is it useful or is it a nice thing to have that we experiment with? I think it’s a nice thing to have that we experiment with generating images.
If the only images you ever use are just randomly illustrative of your topic and don’t convey any additional information, I wish we would stop including them. I know we humans are very image-oriented and like that visual stuff, but it slows the page down, not necessarily all the time, but it can. And it doesn’t help tell the story.
If your article is meant to impart information about a product, I want to see the product. If it’s meant to tell me how to put the product together, I want to see pictures that tell me how to put the product together. If it’s a news article, I want to see a picture of the person, the people, or the news event. I don’t want to see a computer-generated picture of a capybara making bacon. Exactly. Let’s see what they do with it.
What’s the next thing? Search results, search results. Google’s not going to be doing indented results anymore.
No, and I found them quite helpful, but I’d like to see where they go with it, much like how they’ve gotten rid of FAQ results. Let’s see what they do. But again, that’s still not to say you should do anything differently in your day-to-day SEO life.
It’s interesting to know, but I don’t think it will affect our actions. No, no. But if you’ve lost them, don’t worry. It’s not you. It’s them.
Google is testing a news-filled home page. Yeah, I don’t really look at it. I read a few stories because I know it knows my habits, but to me it’s very repetitive. I can only read so much about Rick and Morty in a week.
I know that it’s tracking me all over the place, and because of some of the Reddit groups I read, it shows me a lot of things I’d rather not be reminded that I read.
Yeah, maybe they’re trying to be more like Yahoo. But Yahoo serves a different audience. And should Google be like Yahoo? Do we need another Yahoo, or should Google just be Google?
I know what you mean. Let’s see what they do with it. That’s what Google News is for, if they want to do news properly. I could see that the sites that get mentioned there will probably see a significant bump in traffic, but the joy and excitement over that bump will lessen over time. There will be initial joy, and then it will fade. So it’s not a long-term thing; it’s fun for now.
The Google Rich Results Test now supports paywalled content. I don’t have a problem with it, but I’m not too fond of paywalls. So are they enticing the paywall publishers? “We’ll let you mark up certain stuff so it gets crawled, gets into the index, and draws people to it,” which again is a bit odd.
It’s a weird move for them to do, but it’s nice that they’ve at least split it out, right? I imagine there are a lot of newspapers and media companies that had a hand in lobbying for this change. I think it could be a polarizing move. I think it will remind people how much they hate paywalls and we’ll have to see what happens there.
It’s nice for the media companies that have paywalled content, but I don’t think it’s going to be enough to save their business models, because those business models need to be revamped. This is a sticking plaster on a jugular slice. Yeah, and the bigger they are, the bigger the rebellion against it.
I won’t mention the names of sites, but there are sites where you just enter a URL and it removes the paywall in different ways, and that will keep happening. Those things will get more sophisticated and harder to stop. But again, it’s good to see that paywalled content can at least be split out in structured data.
They’ve now gotten rid of the event rich results, much like the FAQs and how-tos. Yeah, no more event snippets. It’s probably good, because I’ve seen abuse of a lot of these things that are getting removed. I saw them abused so hard, and it just proves to me that we can’t have nice things, because people abuse them until they get taken away.
Yeah, and it’s happened over and over again. You’re given something and then it’s taken away. But it’s hard to know which ones aren’t going to be abused and, if they are, how long it will take, because people will ride the wave as long as they’re making money. Yeah, I did find them useful when they first came out.
But then you just saw more and more spam, essentially. Yeah, that’s not an event. Why are they doing that? Like an event for a sale. No one cares.
So let’s get into the AI stuff. So, optimizing SEO for AI search engines. Roger Montti wrote this. This was in Search Engine Journal earlier this month. Obviously, this is a thing that everyone will have to start thinking about.
I feel like most people have been; if you’re doing good SEO and good development, you’ve been doing this all along and you don’t have to adjust what you’re doing too much. Have you seen anything that would require additional modifications?
No, only that you have to be more conscious of the fact that things have to be optimized to read as human; it’s going back, not quite back to basics. But I remember, at the very beginning of doing SEO and training other people to write for SEO, I would tell them to briefly forget that there is a search engine in the equation and write for the audience.
And that hasn’t changed; all the guidance and requirements are fundamentally the same. But it’s been formalized and has evolved to be more specific, like E-E-A-T. It used to be just a few webmaster guidelines, probably a few paragraphs, and now there’s a whole portal around it.
I think that clear HTML tagging is an important thing, not necessarily for SEOs to remember themselves, but for SEOs to remind developers about. Because I see a lot of poor or questionable choices in HTML tagging that aren’t visible to the user and don’t change the way the user sees the information.
It’s probably just easier for the developer to do it that way. But it does hurt how Google, I keep saying just Google, how the crawlers understand the presentation of the data, because the presentation of the data and the information creates a hierarchy and paints a picture.
And if you use the wrong color paint... okay, that’s also a bad choice. You know what I mean? I’m struggling with my analogies; I’m running out.
The point is that good HTML tagging, good HTML structure, and valid markup have always been, and will continue to be, very important for everyone, which goes back to schema and any kind of structured data.
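As a small, hypothetical example of the kind of tagging choice meant here: both snippets below look the same to a visitor, but only the second tells a crawler what each piece of content actually is.

<!-- Presentation-only markup: the crawler just sees generic boxes -->
<div class="title">October 2023 core update</div>
<div class="text">The rollout started on October 5 and finished on October 19.</div>

<!-- Semantic markup: the same content with an explicit hierarchy -->
<article>
  <h2>October 2023 core update</h2>
  <p>The rollout started on October 5 and finished on October 19.</p>
</article>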
So it always comes back to that original knowledge graph and the fundamentals. You have to have solid fundamentals; otherwise, you’re frosting a styrofoam cake.
So, transforming search with AI: RAG is the future. I almost said “rag,” but it’s retrieval-augmented generation, a technique that refines responses by retrieving relevant data and feeding it to the model alongside the question.
Michael King wrote this. This is a long read. This was in Search Engine Land. It came out just a week or two ago. Maybe just a week.
If you get a chance, everybody in the audience, please go read this. It’s a long read. And it’s very in-depth, very well-researched. I think this is going to be one of the defining articles for the year. This is an important concept to understand.
Basically, he goes through helping you understand how SGE works, the Search Generative Experience. By having that understanding, you will be better able to present your content in a way that helps you along rather than hinders it.
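For readers who want the gist before diving into the article: retrieval-augmented generation means retrieving relevant passages first and grounding the model’s answer in them. Here’s a deliberately tiny Python sketch; real systems use embeddings, a vector index, and an LLM API, but this stands in with keyword overlap so it runs as-is, and the example documents are just made up from topics in this webinar.

documents = [
    "Yoast SEO Premium added a feature to block unwanted AI bots from your site.",
    "Google-Extended asks Google not to use crawled content for its AI models.",
    "WordPress now disables attachment pages by default.",
]

def retrieve(query, docs, k=2):
    # Score each document by how many query words it shares, return the top k.
    words = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(words & set(d.lower().split())), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    # Ground the eventual answer in the retrieved passages.
    context = "\n".join("- " + d for d in retrieve(query, docs))
    return "Answer using only the context below.\nContext:\n" + context + "\n\nQuestion: " + query

print(build_prompt("How do I block AI bots?", documents))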
And again, does it change the way you think about your approach to things by the end of it? That’s something I think will be interesting for 2024, as people come up with strategies and have to adapt to all of these updates and the way AI is changing how people search, let alone how people publish.
It’s one of those things you didn’t think about when AI first came out. I’m not talking about the early days; we know it’s been around for years in different forms, but I mean since it’s been more publicly out there.
Let’s just say since ChatGPT became famous. I didn’t think then: is this going to change how I should think as an SEO, or how a client publishes content, or even how they think about getting discovered on a search engine?
I didn’t think about any of those things at the beginning. I just thought, oh, this is quite cool. It gives you information in a nice way. Very fast. It’s very elegant. All the technology is great. I just didn’t think that might change everything in how we do our jobs. You’ll feel smarter having read that article.
A generative AI-enhanced version of Google Assistant has also been announced: Google Assistant with Bard. In theory, it’s going to help you. It’s a virtual assistant, but we’ve had things like this before, that thing that was supposed to help you, and hopefully the technology has improved.
I feel like maybe the original version of this was the Apple Newton? Is that what it was called? It was cool and you could write on it, but it didn’t go anywhere. But then later on, when the tablets and the iPads and iPhones came out, suddenly it was miraculous.
So, hopefully, technologies are progressing to make those virtual assistants more functional and slightly more helpful than they were before.
Yeah, because even though the technologies were there... I remember when Trello first came out and it had integrations. You could use different connector APIs, and someone invented a thing where they put a barcode scanner on the fridge.
And then when they ran out of something, they scanned it, and it added itself as an item in Trello for them to buy again. I thought that was clever at the time, but that still involves human intervention. Now I think it should know how long it takes me to go through that item and then add it automatically.
Maybe I want it next time, or maybe I don’t want it twice this month. Something like that, but then I may need it if I’m thinking about making something with tuna in it, which it would know about.
My phone asked me this morning if I would like to go to Starbucks and if I would like my usual order, and it offered to place the order for me, which I found creepy and confusing. So I said no.
But yeah, they’re making a lot of progress with everything.
Why don’t we move on to WordPress news and then try to blow through this quickly because we’ve got like six minutes left.
There were changes to attachment pages announced: attachment pages are now disabled by default. This is something that we at Yoast have been recommending you do for a very long time, but now it’s part of WordPress core, because we were so prescient and so smart.
Yeah, there’s not even much to elaborate on. It’s just going to help a lot of sites and their crawl budgets, which, at the scale of WordPress, will make quite a big dent.
Yeah, WordPress 6.4 also supposedly has significant performance improvements. I have not personally experienced them yet, but I have other things I’m working through, and different plugins can drag performance down. Still, I am hopeful that this is going to make people less whiny about how slow WordPress sites are.
It’s good that they’re doing all these improvements. I’ve seen some on some of the sites we’ve worked with. Again, there are so many variables; it might have been a little bit faster that time, or something might have been purged, but all of these little things do help, even if it’s only saving kilobytes, and again, that footprint at scale is bigger than we think.
That was it. So, Yoast news: blocking unwanted AI bots from your site. This is a new feature we’ve added, and it’s in Yoast SEO Premium.
We were already working on it when some of those directives we chatted about earlier came out; some of that’s in there already, and we definitely wanted to get it out onto sites. Some people had even requested it before we put it in. It skipped straight up the roadmap because we know how important it is to some of our users.
Edwin wrote a great article on whether you should block AI bots or not; it’s on our blog. I don’t have a slide for it, but if you go to the Yoast blog, we have some guidance on how to make that decision, because it is a decision that needs to be made, and it’s not a one-size-fits-all kind of thing.
Q: Amanda asked, is there something we can do if we find our data scraped and posted on another site?
A: What can you do? Well, there are a few things. You can file a DMCA report with Google to try to get that site, or at least that content, to stop ranking. The DMCA is probably the only thing you can reasonably do legally.
There’s no way to stop someone from scraping your site if they are determined to scrape it. You can make it less enticing or less appetizing for them to scrape your site, or you can accept that it will happen.
And you can provide them with RSS feeds and embed links back to your content in those feeds.
That is what I do with many of my news sites, because those sites will get scraped; it’s going to happen. So I provide a full RSS feed and have links peppered into the full content that point back to other articles on our site.
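A hypothetical feed item showing the idea (namespace declarations omitted): the full content goes into the feed, but with absolute links back to your own site baked in, so anyone republishing it verbatim republishes your backlinks too.

<item>
  <title>Example article title</title>
  <link>https://www.example.com/example-article/</link>
  <content:encoded><![CDATA[
    <p>Full article text here, with links to
    <a href="https://www.example.com/related-article/">a related article</a>
    and <a href="https://www.example.com/">the homepage</a> left intact.</p>
  ]]></content:encoded>
</item>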
So, if they scrape my content, I will use them to build backlinks. And that DMCA submission also sets some automation in place, right? It will say, well, this is your site and this is their site, and the system, not the algorithm, but the system, will be able to know that you actually did publish that content first and they did not.
And therefore, it makes you the citation, if you want to call it that. It can even help penalize that other site, because if they keep doing it, Google will just take that whole domain out, which makes it much harder for them.
If there are enough bad reports filed against them, Google will take them out. So if they’re notorious and open about it, just file a lot of complaints. The trick with the complaints is that you have to be authorized for your domain to make the complaint on its behalf.
So if other sites are getting scraped too, reach out to those other webmasters and say, hey, you go file a complaint as well. We’ll all file complaints and get this guy taken out.
Q: Jeffrey van Dam asks, when do you think Apple will have this search engine online?
It seems like a long-term plan. It sounds like they’re working on it, but I don’t think they have plans to implement it yet, or at least they won’t announce it, because of the whole Google, 18-billion-dollars-a-year arrangement. They’ll take a long, hard look at what they want to do next. But for what we do, nothing should change.
But if they’re going to do something, it’s not going to be for a good couple of years; it will be at least that long before we even hear anything about it, because it would be huge. If they’re going to do it, they’re going to go hard on it, I think. Guns blazing. But not right now; it’s not on the horizon.
Topics & sources
Google news
- Apple has what it needs to launch its own Google replacement
- Google pays Apple $18B to $20B a year to keep its search in iPhone
- Google On Why Google Discover Traffic May Drop Or Increase
- Impact of the Google September 2023 helpful content update was big for the SEO industry
- October 2023 Spam Update
- Google releases October 2023 broad core update
- Google-Extended Robots Directive Does Not Work For Search Generative Experience
- Google Testing New Recipe Search Interface
- Google’s AI-powered search experience can now generate images, write drafts
- Google Search officially stops indented results
- Google.com tests a news-filled homepage, just like Bing and Yahoo
- Google Rich Results Test now supports paywalled content
- Google rich results for events removed from search snippets
AI news
- Bing Explains SEO For AI Search
- How Search Generative Experience works and why retrieval-augmented generation is our future
- Google is launching a generative AI-enhanced version of Assistant
WordPress news
News presenters
Alex Moss
Alex is our Principal SEO. With a background in technical SEO, he has been working in Search since its infancy and also has years of experience with WordPress, having developed several plugins over the years. He is involved in many aspects of Yoast, from the product roadmap to content strategy.
Carolyn Shelby
Carolyn is our Principal SEO. She leverages more than two decades of hands-on experience optimizing websites for maximum visibility and engagement. She specializes in enterprise and news SEO, and is passionate about demystifying the intricacies of search engine optimization for businesses of all sizes.