What is the translation of "Googlebot" in Chinese?

Examples of using Googlebot in English and their translations into Chinese

How Googlebot sees your website.
Googlebot如何查看您的网站。
Remove URLs already crawled by Googlebot.
删除已经被Googlebot抓取的URL.
How Googlebot views your pages.
了解Googlebot是如何查看网页的.
In the robots.txt file, you can tell search engine bots (and specifically Googlebot) to avoid some pages.
在robots.txt文件中，您可以告诉搜索引擎漫游器（特别是Googlebot）避开某些页面。
Googlebot can't access your site.
Googlebot无法访问您的网站。
If it "notices" that the server can't deal with page overload, Googlebot slows down or stops crawling.
如果它“注意到”服务器无法处理页面过载，Googlebot会减慢或停止抓取。
If Googlebot finds a robots.txt file for a site.
如果Googlebot找到某个网站的robots.txt文件。
It's performed by software, called a crawler or a spider (or Googlebot, in the case of Google).
这个工作是由软件来执行，被称为爬虫或蜘蛛（或者Googlebot，就Google而言）。
Googlebot cannot access your site.
Googlebot无法连接到你的网站。
The intention behind the hidden links is to be crawled by Googlebot, but they are unreadable to humans because:
隐藏链接的目的是被Googlebot抓取，但它们对人类是不可读的，因为：
Googlebot can't access your site.
Googlebot无法访问您的网站。
This task is performed by software, called a crawler or a spider (or Googlebot, as is the case with Google).
这个工作是由软件来执行，被称为爬虫或蜘蛛（或者Googlebot，就Google而言）。
In general you want Googlebot to access your site so your web pages can be found by people searching on Google.
一般来说,你是愿意让Googlebot访问你的网站,这样你的网页才可以被人们在谷歌搜到。
So if you want to tell this spider what to do, a relatively simple User-agent: Googlebot line will do the trick.
因此，如果您想告诉这个蜘蛛该做什么，一个相对简单的User-agent: Googlebot行就可以了。
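A minimal robots.txt group using such a User-agent line might look like the following sketch (the blocked path /example/ is a placeholder, not from the original sentence):

```
User-agent: Googlebot
Disallow: /example/
```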
While Googlebot will not be able to crawl disallowed pages, they may be a significant part of your site's user experience.
尽管Googlebot无法抓取这些网页,但是它们依然是网站用户体验的重要组成部分。
Now, however, Google will use its Smartphone Googlebot to crawl, index, and rank the mobile version of the site as well.
但是,Google现在将使用其智能手机Googlebot抓取,索引和排名该网站的移动版本。
Googlebot can typically read Flash files and extract the text and links in them, but the structure and context are missing.
Googlebot通常可以读取Flash文件并提取其中的文本和链接，但会丢失文件的结构和上下文。
Where a parameter indicates a session ID, you may want to exclude all URLs that contain it to ensure Googlebot doesn't crawl duplicate pages.
当某个参数表示会话ID时，您可排除所有包含该ID的网址，以确保Googlebot不会抓取重复的网页。
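As a sketch, such session-ID URLs can be excluded with a wildcard rule in robots.txt (the `sessionid` parameter name here is an assumption for illustration; Googlebot supports `*` wildcards in path rules):

```
User-agent: Googlebot
Disallow: /*sessionid=
```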
Robots.txt rules, used both by Googlebot and other major crawlers, as well as by about half a billion websites that rely on REP.
robots.txt规则的实际经验，这些规则由Googlebot和其他主要爬虫以及大约5亿依赖REP的网站使用。
Google's John Mueller discourages websites from linking to every page from the home page, saying it may prevent Googlebot from clearly understanding a site's architecture.
Google的John Mueller不鼓励网站从主页链接到每个页面，称这可能会阻碍Googlebot清楚地了解网站的架构。
After you fetch a URL as Googlebot, if the fetch is successful, you will now see the option to submit that URL to our index.
如果您像Googlebot那样成功抓取了一个URL,那么,您将会看到提交该URL到我们的索引这一选项。
Googlebot (and most other crawlers) will only obey the rules under the more specific user-agent line, and will ignore all others.
Googlebot(以及大多数其他抓取工具)只会遵守更具体的用户代理行下的规则,并会忽略所有其他规则。
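A sketch of that behavior: given the two groups below, Googlebot ignores the generic `*` group and obeys only its own group, so it may crawl everything except the (hypothetical) /drafts/ path, while other crawlers are blocked entirely:

```
User-agent: *
Disallow: /

User-agent: Googlebot
Disallow: /drafts/
```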
Language-dependent crawling: here, Googlebot begins to crawl by using an Accept-Language HTTP header within the request.
基于语言的抓取：Googlebot将开始在抓取时使用包含Accept-Language HTTP header的请求。
Googlebot visits each of these websites, detecting links (SRC & HREF) on each page and adding them to its list of pages to crawl.
Googlebot在访问其中的每个网站时,会检测各网页上的链接(SRC和HREF),并将这些链接添加到要抓取的网页列表。
In the above case, you are disallowing the user agent called Googlebot from crawling /nogooglebot/ and all contents below this directory.
在上例中，您禁止名为Googlebot的User-agent抓取/nogooglebot/以及该目录下的所有内容。
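The "above case" this sentence refers to is not included in the excerpt; it presumably looked like this robots.txt group:

```
User-agent: Googlebot
Disallow: /nogooglebot/
```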
For Googlebot, we do not have any preference and recommend that webmasters consider their users when deciding on their redirection policy.
对于Googlebot,我们对各种政策没有任何偏好,并建议网站站长在决定重定向政策时以用户为出发点。
If you're worried about rogue bots using the Googlebot user-agent, we offer a way to verify whether a crawler is actually Googlebot.
如果您担心流氓漫游器冒用Googlebot用户代理，我们提供了一种方法来验证某个抓取工具是否确实是Googlebot。
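Google's documented verification method is a reverse-then-forward DNS check: look up the PTR record for the requesting IP, confirm the hostname belongs to googlebot.com or google.com, then resolve that hostname forward and confirm it maps back to the same IP. A minimal Python sketch (function names are my own; real use requires network access):

```python
import socket

def hostname_is_google(hostname: str) -> bool:
    """True if the reverse-DNS name is inside Google's crawler domains.

    The leading dot in the suffixes guards against lookalike names
    such as "googlebot.com.attacker.example".
    """
    return hostname.rstrip(".").endswith((".googlebot.com", ".google.com"))

def verify_googlebot(ip: str) -> bool:
    """Reverse-then-forward DNS check for a claimed Googlebot IP."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)        # reverse (PTR) lookup
        if not hostname_is_google(hostname):
            return False
        _, _, addrs = socket.gethostbyname_ex(hostname)  # forward (A) lookup
        return ip in addrs                               # must map back to same IP
    except (socket.herror, socket.gaierror):
        return False
```

The forward lookup matters: anyone can set a PTR record claiming to be googlebot.com, but only Google can make that hostname resolve back to the crawler's IP.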
Googlebot [37] is described in some detail, but the reference covers only an early version of its architecture, which was written in C++ and Python.
Google Crawler（Brin and Page，1998）有一些细节描述，但这些细节仅仅是关于一个早期版本的体系结构，该版本使用C++和Python编写。
Given these Googlebot limitations, it seems unfair to assess performance without the capability to discern whether the website is fast or slow.
考虑到Googlebot的这些限制，在没有能力辨别网站是快还是慢的情况下评估性能似乎是不公平的。
In the robots.txt file, to block Googlebot from crawling all pages under a particular directory (for example, private), you would use the following robots.txt entry:
在robots.txt文件中，要阻止Googlebot抓取某一特定目录（例如private）下的所有网页，可使用以下robots.txt条目：
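The entry itself was truncated from this excerpt; given the "private" directory named in the sentence, it presumably looked like:

```
User-agent: Googlebot
Disallow: /private/
```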