A recent update to Baidu Baike’s robots.txt – a file that tells search engine crawlers which uniform resource locators (URLs), commonly known as web addresses, they may access on a site – has outright blocked Google’s Googlebot and Microsoft’s Bingbot crawlers from indexing content on the Chinese platform.
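The exact directives in Baidu Baike’s updated file are not reproduced here; the sketch below is a hypothetical illustration, in Python using the standard library’s urllib.robotparser, of how blanket-disallow rules for the two crawlers are typically written and how a rule-abiding crawler checks them before fetching a page. The Googlebot and Bingbot user-agent names are real; the specific rules and the example URL are assumptions for illustration only.

```python
# Illustrative only: a hypothetical robots.txt that shuts out Google's and
# Microsoft's crawlers while saying nothing about other bots.
from urllib.robotparser import RobotFileParser

HYPOTHETICAL_ROBOTS_TXT = """\
User-agent: Googlebot
Disallow: /

User-agent: Bingbot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(HYPOTHETICAL_ROBOTS_TXT.splitlines())

# A compliant crawler asks permission before fetching any URL on the site.
for bot in ("Googlebot", "Bingbot", "SomeOtherBot"):
    allowed = parser.can_fetch(bot, "https://baike.baidu.com/item/example")
    print(bot, "may crawl" if allowed else "is blocked")
```

Note that robots.txt is advisory rather than enforceable: it blocks indexing only because major search engines’ crawlers voluntarily honour it.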
That update appears to have been made sometime on August 8, according to records on internet archive service the Wayback Machine. Those records also show that earlier the same day, Baidu Baike still allowed Google and Bing to browse and index its online repository of nearly 30 million entries, with only parts of its website designated as off limits.
That followed US social news aggregation platform and forum Reddit’s move in July, when it blocked various search engines, except Google, from indexing its online posts and discussions. Google has a multimillion-dollar deal with Reddit that gives it the right to scrape the social media platform for data to train its AI services.
By comparison, the Chinese version of online encyclopaedia Wikipedia has 1.43 million entries to date, which remain accessible to search engine crawlers.
Following Baidu Baike’s robots.txt update, the Post’s survey of Google and Bing on Friday found that many entries from the Wikipedia-style service – probably drawn from older cached content – still came up in the US search platforms’ results.
Representatives from Baidu, Google and Microsoft did not immediately reply to requests for comment on Friday.
GenAI refers to the algorithms and services, such as ChatGPT, that are used to create new content, including audio, code, images, text, simulations and videos.
OpenAI, for example, forged a deal in June with American news magazine Time that gives it access to all the archived content from more than 100 years of the publication’s history.