Google's mission is to organize the world's information and make it universally accessible and useful. To that end, Google indexes as much online data as possible with an automated program: Googlebot. Until recently, that program was effectively blind: it did not see websites the way users see them. That has now changed. The latest Google developments at a glance.
The personal experience
Google has always focused on providing an optimal user experience. Websites that appear among the search results in an unnatural way do not give users what they are looking for. That is why the Webspam team, led by Matt Cutts, is constantly busy weeding out prohibited SEO practices. "Focus on the user and all else will follow" is Google's motto.
A good SEO practice is to make your website accessible to blind users. They browse the internet with screen-reader software: it reads out the title of a page, the headings, and the links. The blind user speaks a command to follow a link, and the screen reader reads out everything on that page. For example, if a blind user is looking for the first date of the Cheese Market in Alkmaar and lands on a page titled "Cheese Market," he does not immediately know whether he is in the right place; he has to listen to the entire text. The title "Cheese Market Alkmaar" says much more. Naming places is a good SEO method (Venice update).
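To make this concrete, here is a minimal sketch (a hypothetical audit helper, not a real screen reader) that extracts the elements a screen reader typically announces first: the title and the headings. It shows why a descriptive, place-naming title matters.

```python
from html.parser import HTMLParser

class OutlineParser(HTMLParser):
    """Collects the title and headings in document order --
    roughly what a screen reader announces first."""
    def __init__(self):
        super().__init__()
        self.outline = []     # (tag, text) pairs in document order
        self._current = None  # tag we are currently inside

    def handle_starttag(self, tag, attrs):
        if tag in ("title", "h1", "h2", "h3"):
            self._current = tag

    def handle_data(self, data):
        if self._current and data.strip():
            self.outline.append((self._current, data.strip()))

    def handle_endtag(self, tag):
        if tag == self._current:
            self._current = None

page = ("<html><head><title>Cheese Market Alkmaar</title></head>"
        "<body><h1>Opening dates</h1></body></html>")
parser = OutlineParser()
parser.feed(page)
print(parser.outline)
# [('title', 'Cheese Market Alkmaar'), ('h1', 'Opening dates')]
```

A listener hearing "Cheese Market Alkmaar" knows immediately that this is the right page; a bare "Cheese Market" would force them to listen on.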
Googlebot sees formatting
Googlebot sees images
Previously, an image could show anything, but Google only looked at the text around it: the file name, the alternative text (the alt attribute), and the context of the page. A photo of a dog could thus be indexed as a photo of a cat. On November 17, Google announced a self-learning program (a "Neural Image Caption Generator") that can describe photos accurately.
With this, blind users can get a much better user experience: they are no longer dependent on the text around an image. The challenge the researchers tackled is not only what is in the photo, but also what happens in the photo: how do the objects relate to each other? Google already took part in the 'ImageNet large-scale visual recognition challenge' (ILSVRC2014) and stated clearly:
“These technological advances will enable even better image understanding on our side and the progress is directly transferable to Google products such as photo search, image search, YouTube, self-driving cars, and any place where it is useful to understand what is in an image as well as where things are. ”
What does that mean for SEO? No more fussing with the alt attribute: if there is a dog in the photo, 'dog' had better be in the alt text too. Figments.com therefore expects an update to the Google algorithm soon.
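Until such an update lands, alt text still carries the weight. As a sketch (a hypothetical check, not part of any real SEO tool), this flags images whose alt text is missing or empty, the cases where Google and screen readers have nothing to go on:

```python
from html.parser import HTMLParser

class AltAudit(HTMLParser):
    """Flags <img> tags without useful alt text."""
    def __init__(self):
        super().__init__()
        self.missing = []  # src values of images lacking alt text

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        a = dict(attrs)
        if not a.get("alt", "").strip():
            self.missing.append(a.get("src", "?"))

page = '<p><img src="dog.jpg" alt="brown dog"><img src="cat.jpg" alt=""></p>'
audit = AltAudit()
audit.feed(page)
print(audit.missing)
# ['cat.jpg'] -- this image still needs a description
```

If the alt text says 'dog' but the photo shows a cat, a caption generator like Google's can now catch the mismatch; an empty alt attribute leaves both the index and the blind visitor guessing.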
I'm not a robot
CAPTCHAs. You know them: the house numbers or extremely blurry strings of numbers and letters that you have to type in. They determine whether you are a spam robot (posting nonsense and clogging things up) or a real person trying to make contact through a form. By typing in countless house numbers and words, we help digitize photos and scans: Google learns the real house numbers in Google Street View photos, and scanned old books are made digitally searchable.
Now Google acknowledges that artificial-intelligence technology can read CAPTCHAs with up to 99.8% accuracy. Good news for everyone tired of retyping them (especially when a required field or a validation error forces you to do it all over again). For Google, it means a new kind of CAPTCHA: a simple check box. Possibly with an extra test: match the picture of a cat with other cats.
New eyes for the user
And there you immediately see Google's challenge: it is itself researching software that can pass that test too. If such software becomes freely available, and thanks to the wonders of the internet that is only a matter of time, we will be facing real data pollution: we will no longer be able to distinguish real visitors from software. It gets even more interesting with the arrival of intelligent personal assistants: software that you, as a user, give an assignment: 'find the cheapest brand of jeans in size 36-34 and in dark blue'. The user then gets new eyes to put to work.