Could Google passage indexing be leveraging BERT? - Google announced its BERT update about a year ago. Google BERT was released in October 2019 as a machine learning update that helps Google understand queries and content better. At launch it affected only about 10% of English-language queries, and the initial BERT update was also applied to text extraction and summarization in featured snippets. After the announcement of BERT in production search, the rollout expanded to many more countries.
The October 2019 announcement shook the SEO world. According to Google, the update was the biggest leap forward in search in the past five years and one of the most important announcements the company had made. Twelve months later, development around BERT continues.
The basic idea behind BERT is bidirectional pre-training on context windows of words drawn from an extensive text collection. BERT serves as a foundation and can also be fine-tuned for more granular tasks. A context window has a fixed scope; for example, a window might be 10 words long with the target word at the 6th position.
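To make the context-window idea concrete, here is a minimal sketch (the function name and window logic are illustrative, not Google's implementation) of slicing a 10-word window around a target word:

```python
def context_window(tokens, target_index, size=10):
    """Return a window of `size` tokens around the target word."""
    half = size // 2
    start = max(0, target_index - half)
    end = min(len(tokens), start + size)
    start = max(0, end - size)  # re-anchor if the window hits the right edge
    return tokens[start:end]

sentence = "the quick brown fox jumps over the lazy sleeping dog today".split()
# Target word "over" sits at the 6th position of the resulting 10-word window.
window = context_window(sentence, target_index=5, size=10)
```

A model pre-trained bidirectionally sees the words on both sides of the target within such a window, rather than only the words that precede it.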
On October 25, 2019, the production search announcement followed a frenetic BERT-focused year in the language research community. Between 2018 and 2019, a wave of BERT-type models named after Sesame Street characters appeared, including ERNIE from Baidu. Facebook and Microsoft have also been busy building BERT-like models.
The SEO world has since gone quiet about BERT. Yet through 2019 and 2020 there were huge developments in AI and natural language understanding that should keep SEOs up on their BERT-stalking game. BERT has become a reference to a methodology now used in parts of search and across the machine learning language field, and it has produced greater improvements at Google than anything seen in the last decade.
Google has also revealed some exciting new search features, including improvements to its misspelling algorithms, image technology, conversational agents, and hum-to-search in Google Assistant. With the help of the new technology, Google can better detect, identify, and understand keywords on a web page and surface the best answer from a single passage or paragraph.
When it comes to the "passage indexing" announcement, it caused some confusion in the SEO community: some SEOs questioned whether individual passages, rather than whole pages, are added to the index.
BERT has featured everywhere since the October 2019 announcement. BERT depends on a self-attention mechanism, through which each word gains context from the words around it. Training on more word combinations is required to capture the full context of a word; as the saying goes, "bigger is definitely better." Jacob Devlin, one of the original BERT authors, confirmed this effect in a presentation with a slide saying, "Big models help a lot."
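The self-attention mechanism can be sketched in a few lines. This is a simplified illustration, with identity projections standing in for the learned query, key, and value weight matrices that a real BERT layer would use:

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over a sequence of word vectors X.

    Every row of X attends to every other row, so each output vector
    mixes in context from the whole window. (Identity projections stand
    in for the learned query/key/value weights of a real model.)"""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                    # pairwise similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax per row
    return weights @ X                               # context-mixed vectors

# Three toy 4-dimensional "word" embeddings
X = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0, 0.0]])
out = self_attention(X)
```

Because every word attends to every other word in the sequence, context flows in both directions at once, which is what distinguishes BERT-style bidirectional models from left-to-right language models.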
Big BERT models improve upon the state of the art (SOTA). Skyscraper SEO means identifying what a competitor does and then building something bigger or better; big BERT-type models develop the same way, by adding more parameters and training on more data. If you still have any queries regarding the Google passage indexing and BERT update, call us.