How to Use This Tool
Set Default Access
Choose whether all search engine crawlers are allowed or refused by default. Most sites should start with 'Allow All' and then restrict specific directories.
Add Paths and Sitemap
Enter directories to block in Disallow Paths (like /admin/ or /private/). Add your sitemap URL so search engines can find it. Set a crawl delay if your server needs it.
Configure Individual Bots
Override the default for specific crawlers. For example, allow all bots but refuse Baidu, or refuse all but allow Google. Each bot has Default, Allow, and Refuse options.
Copy or Download
The robots.txt preview updates live as you make changes. Copy the output to clipboard or download it as a robots.txt file. Upload it to your website's root directory.
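Put together, a file generated by the steps above might look like this (the paths and sitemap URL are placeholders — substitute your own):

```
User-agent: *
Disallow: /admin/
Disallow: /private/
Crawl-delay: 10

Sitemap: https://example.com/sitemap.xml
```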
Frequently Asked Questions
What is a robots.txt file and why do I need one?
A robots.txt file is a plain text file placed in your website's root directory that tells search engine crawlers which pages or directories they can or cannot access. It helps control how search engines index your site, prevents crawling of private or duplicate content, and can improve crawl efficiency.
Where should I place the robots.txt file on my website?
The robots.txt file must be placed in the root directory of your domain. For example, https://example.com/robots.txt. Search engines only look for it at this exact location. Placing it in a subdirectory will not work.
What does User-agent mean in robots.txt?
User-agent identifies which search engine crawler the rules apply to. 'User-agent: *' means the rules apply to all crawlers. You can also specify individual crawlers like 'User-agent: Googlebot' to create rules that only apply to Google's crawler.
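For example, one file can combine a global group with a Googlebot-specific group (the paths here are illustrative):

```
# Rules for all crawlers
User-agent: *
Disallow: /private/

# Rules that apply only to Google's crawler
User-agent: Googlebot
Disallow: /experiments/
```

A crawler uses the most specific group that matches its name, so Googlebot follows only the second group here.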
What is the difference between Allow and Disallow?
Disallow tells crawlers not to access a specific path. Allow overrides a Disallow rule for a more specific path. For example, you can Disallow /admin/ but Allow /admin/public/. Allow is mainly used to create exceptions within broader Disallow rules.
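That example looks like this in a generated file (paths are illustrative):

```
User-agent: *
Allow: /admin/public/
Disallow: /admin/
```

Under the current standard the most specific (longest) matching rule wins regardless of order, but some older parsers read rules top to bottom, so placing the Allow exception before the broader Disallow is a safe habit.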
What is Crawl-Delay in robots.txt?
Crawl-Delay tells crawlers to wait a specified number of seconds between requests. This prevents your server from being overwhelmed by too many crawler requests at once. Note that Google does not support Crawl-Delay; Googlebot sets its crawl rate automatically.
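For crawlers that do honor it (Bing, for example), the directive looks like this — the 10-second value is just an illustration:

```
User-agent: Bingbot
Crawl-delay: 10
```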
Should I add my sitemap URL to robots.txt?
Yes. Adding a Sitemap directive helps search engines discover your sitemap automatically. The format is 'Sitemap: https://example.com/sitemap.xml'. This is especially useful if your sitemap is not linked from your homepage.
Can robots.txt block pages from appearing in Google search results?
Robots.txt prevents crawling but does not prevent indexing. If other pages link to a disallowed page, Google may still index the URL (without content). To fully prevent indexing, use a 'noindex' meta tag or X-Robots-Tag header instead.
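To keep a page out of results, crawlers must be able to fetch it and see the noindex signal — so do not disallow that page in robots.txt. A minimal example of the meta-tag approach:

```
<!-- Inside the page's <head> -->
<meta name="robots" content="noindex">
```

For non-HTML resources such as PDFs, the equivalent is an `X-Robots-Tag: noindex` HTTP response header set by your server.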
What happens if I block all crawlers with Disallow /?
Blocking all crawlers with 'Disallow: /' prevents search engines from crawling any page on your site. This effectively removes your site from search results over time. Only do this if you intentionally want to de-index your entire site.
Do all search engines follow robots.txt rules?
Major search engines like Google, Bing, Yahoo, and Baidu follow robots.txt rules. However, robots.txt is a guideline, not a security measure. Malicious bots may ignore it. Never rely on robots.txt to protect sensitive data — use authentication instead.
Is this robots.txt generator free to use?
Yes. This generator is completely free, runs entirely in your browser, and requires no registration. Your configuration is never sent to any server.
Free Robots.txt File Generator — Create and Download Your Robots.txt Instantly
A robots.txt file is a small but critical text file that controls how search engine crawlers interact with your website. It tells bots like Googlebot, Bingbot, and others which pages they can access and which they should skip. Every website that wants to manage its search engine visibility needs a properly configured robots.txt file in its root directory.
This free robots.txt generator lets you create a complete robots.txt file without writing a single line of code. Set default access rules for all crawlers, configure individual bots independently, add disallow and allow paths, specify your sitemap URL, and set crawl delay — all through a visual interface. The generated file updates in real time as you make changes, and you can copy or download it instantly. Whether you are launching a new website, fixing SEO issues, blocking admin directories from search results, or controlling which search engines can crawl your content, this tool gives you a properly formatted robots.txt file in seconds with no sign-up required.
Features Explained
Default Access Control
Set whether all search engine crawlers are allowed or refused by default with a single toggle. 'Allow All' lets every bot crawl your site unless specifically restricted. 'Refuse All' blocks everything — useful for staging sites or sites under development.
12 Individual Bot Controls
Override the default setting for specific search engine crawlers including Google, Bing, Yahoo, Baidu, Yandex, DuckDuckGo, Naver, and others. Each bot has three options: Default (follows the global rule), Allow, or Refuse.
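Refusing one bot while allowing everyone else produces two groups like these (assuming the tool emits Baiduspider as Baidu's user-agent token — the exact token may differ):

```
# Default: everyone may crawl (an empty Disallow blocks nothing)
User-agent: *
Disallow:

# Baidu's crawler is refused everywhere
User-agent: Baiduspider
Disallow: /
```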
Disallow Paths
Enter the directory or file paths you want to block from crawling, one per line. Common examples include /admin/, /private/, /tmp/, and /cgi-bin/. These paths are relative to your domain root.
Allow Paths (Overrides)
Add paths that should remain crawlable even within a broader Disallow rule. For example, disallow /admin/ but allow /admin/public/. This gives you fine-grained control over crawler access.
Crawl-Delay Setting
Specify how many seconds crawlers should wait between requests. Options range from no delay to 120 seconds. Note that Google ignores Crawl-Delay and manages its crawl rate automatically.
Sitemap URL Directive
Enter your sitemap URL to include a Sitemap directive in the generated file. This helps search engines discover your sitemap automatically without relying on other discovery methods.
Live Preview
The generated robots.txt content updates in real time as you change any setting. No need to click a generate button — see the exact output immediately as you configure options.
Copy to Clipboard
One-click copy of the complete generated robots.txt content. Paste it directly into your hosting file manager, FTP client, or code editor.
Download as File
Download the generated content as a robots.txt file ready to upload to your website's root directory. No manual file creation needed.
Reset to Defaults
Clear all settings and start fresh with a single click. Resets access rules, bot settings, paths, sitemap, and crawl delay to their default values.
Who Is This Tool For?
Website Owners
Create a properly formatted robots.txt file for your website without learning the syntax. Control how search engines crawl and index your pages.
SEO Specialists
Generate robots.txt files for client websites as part of technical SEO audits. Quickly configure crawler access rules for optimal indexing.
Web Developers
Generate robots.txt during site deployment instead of writing it manually. Avoid syntax errors that could accidentally block important pages.
WordPress Site Owners
Create a custom robots.txt to replace or supplement the default WordPress robots.txt. Block wp-admin, wp-includes, or plugin directories from crawlers.
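A common WordPress pattern (illustrative — adjust to your setup) blocks the admin area while keeping the AJAX endpoint that many themes and plugins rely on crawlable:

```
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
```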
E-commerce Store Owners
Prevent search engines from crawling cart pages, checkout flows, user account pages, and internal search results that create duplicate content.
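For a store, that might translate into disallow rules like these (paths vary by platform — these are examples):

```
User-agent: *
Disallow: /cart/
Disallow: /checkout/
Disallow: /account/
Disallow: /search
```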
Bloggers
Block tag pages, author archives, or draft directories from being indexed. Keep search engine focus on your actual content.
Marketing Teams
Ensure landing pages are crawlable while blocking tracking URLs, campaign parameters, and internal tools from search results.
DevOps Engineers
Generate robots.txt as part of deployment pipelines. Download the file and include it in build artifacts automatically.
Freelance Web Designers
Quickly create robots.txt files for every client project. Download and deploy alongside the finished website.
People Launching New Websites
Get your robots.txt right from day one. Allow search engines to index your content and find your sitemap immediately after launch.
Staging Site Managers
Block all crawlers from staging and development environments. Prevent test content from appearing in search results.
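The 'Refuse All' output for a staging site is just two lines:

```
User-agent: *
Disallow: /
```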
Agency Teams Managing Multiple Sites
Generate robots.txt files for different clients with different requirements. Copy or download each one independently.
Content Managers
Block directories containing PDFs, downloads, or internal documents that should not appear in search results.
Forum and Community Admins
Block user profile pages, search result pages, and private sections from being indexed by search engines.
People Fixing SEO Issues
If Google Search Console reports crawl issues, check and regenerate your robots.txt to ensure the correct pages are accessible.
Students Learning SEO
Understand how robots.txt directives work by experimenting with the generator and seeing the output update in real time.
Portfolio Website Owners
Allow crawlers to index your portfolio pages while blocking admin panels, draft projects, and client-only content.
SaaS Product Owners
Block application dashboards, API endpoints, and user-specific pages from search engines while allowing marketing pages.
Photographers and Artists
Control which image directories search engines can crawl. Allow portfolio galleries while blocking high-resolution originals.
News and Media Sites
Manage crawler access to archive pages, AMP versions, and syndicated content to prevent duplicate content issues.
Non-Profit Organizations
Ensure your mission-critical pages are indexed while blocking donor portals, admin areas, and internal documents.
People Migrating Websites
Update robots.txt during domain migrations to ensure old paths are handled and new paths are crawlable.
Multi-Language Site Owners
Control crawling of language-specific directories. Ensure the correct regional versions of your site are indexed.
Anyone Who Needs a Robots.txt File
If your website does not have a robots.txt file yet, this tool creates one in seconds. No coding knowledge required.
Tips for Using This Tool
Start with Allow All
Most websites should allow all crawlers by default and selectively block specific directories. Starting with Refuse All and then allowing paths is harder to manage and more error-prone.
Always add your sitemap
Including your sitemap URL in robots.txt helps search engines discover all your pages. This is especially important for new sites or sites with complex navigation.
Block admin and private directories
Add /admin/, /wp-admin/, /cgi-bin/, /private/, and any other directories that contain sensitive or irrelevant content to the Disallow paths.
Test after uploading
After uploading your robots.txt file, use the robots.txt report in Google Search Console (the successor to the retired robots.txt Tester) to verify that important pages are still crawlable and restricted pages are blocked.
Do not use robots.txt for security
Robots.txt is a guideline that well-behaved crawlers follow. It does not prevent access. Never use it to hide sensitive information — use authentication and access controls instead.
Be careful with Disallow /
Disallowing the root path (/) blocks everything. Double-check that this is intentional. For staging sites, use Refuse All. For live sites, use specific path restrictions.
Use the Download button for deployment
Download the generated file and upload it to your web server's root directory. The file must be accessible at https://yourdomain.com/robots.txt.
Google ignores Crawl-Delay
Google determines its crawl rate automatically and no longer offers a manual crawl rate setting in Search Console; if Googlebot is overloading your server, returning 429 or 503 responses will slow it down. Crawl-Delay is honored by Bing and some other crawlers.
Review the preview before copying
The live preview shows exactly what your robots.txt file will contain. Scan through it to make sure the directives match your intentions before copying or downloading.
Keep robots.txt simple
Complex robots.txt files are harder to maintain and debug. Use broad rules with a few specific overrides rather than dozens of individual path restrictions.
Privacy & Security
This robots.txt generator runs 100% in your browser. Your website URL, directory paths, and crawler settings are never sent to any server, stored in any database, or shared with any third party. All generation happens locally on your device.
No cookies, no analytics, no registration required. Your SEO configuration stays completely private. Close the tab and everything is gone.