ToolsInBrowser

Top 15 Tools Every Backend & API Developer Should Bookmark

Developer Tools · 18 min read

Backend work looks different from the outside than it does from the inside. From the outside, it is about writing clean code, designing thoughtful APIs, and shipping reliable services. From the inside, it is mostly a thousand tiny tasks that have nothing to do with the interesting parts of the job. You spend your morning tracking down why a webhook signature does not validate. You spend your afternoon trying to remember the difference between a 403 and a 401, or which HTTP status code you are supposed to return when a resource exists but the user cannot access it. You spend the evening writing a regex for a validator, then another regex because the first one missed an edge case, then a third because you accidentally matched too much.

The tools that solve these micro-problems are not glamorous. Nobody brags about having a good HTTP status code reference open in a browser tab. But the compound effect of having the right small tools at your fingertips is enormous. Five minutes saved on a task you do ten times a day is fifty minutes back in your week, and a hundred minutes saved a week is ten hours a month reclaimed for actual engineering work instead of fighting boilerplate and looking up trivia.

Here are the 15 browser-based tools that every backend and API developer should have bookmarked and pinned. All free, all run entirely in your browser with no server uploads, no signup, no ads, and no permission prompts. Pin them once and stop searching for them every week.

JSON Validator

You cannot trust JSON coming from anywhere you did not write yourself. Third-party APIs return malformed JSON more often than anyone wants to admit. Configuration files get hand-edited and a trailing comma sneaks in. Logging systems emit JSON lines that get truncated when log buffers fill up. A frontend sends a payload that looks fine in their console but fails validation on your server for reasons neither of you can immediately identify.

A json validator is the first line of defense for any of these situations. Paste the JSON, and the validator tells you immediately whether the string is syntactically valid, and if not, exactly which line and column the parser gave up on. Compare that to the default experience of sending bad JSON to your parser, which gives you a cryptic error like “Unexpected token } at position 1423” and leaves you counting characters by hand to find the problem.

The validator also reports useful structure statistics: total depth, total key count, and size in bytes. That sounds trivial until you are debugging why a payload is 3 MB when it should be 30 KB, and the validator tells you that a single nested array has 47,000 entries because an upstream process is not paginating correctly. A json validator is the fastest way to answer “is this JSON even parseable” before you spend an hour debugging your code on the assumption that it is.
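If you want the same line-and-column answer in code, Python's standard json module exposes it on the exception. A minimal sketch (the payload here is invented):

```python
import json

bad = '{"user": "alice", "roles": ["admin",]}'  # a trailing comma snuck in

try:
    json.loads(bad)
except json.JSONDecodeError as e:
    # The parser reports exactly where it gave up, not just "position 1423"
    print(f"line {e.lineno}, column {e.colno}: {e.msg}")
```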

Pro tip: When a production bug involves a third-party webhook payload, paste the payload into a json validator before you touch any code. Nine times out of ten, the issue is not in your parser but in the upstream system sending you slightly-off JSON, and catching that early saves you from debugging your own correct code for hours.

JSON Schema Validator

Validating that JSON is syntactically correct is easy. Validating that JSON matches an expected shape is where real backend debugging lives. An API can return perfectly valid JSON that is still completely wrong because a required field is missing, a numeric field arrived as a string, or a nested object has the wrong key names. The parser accepts it without complaint. Your code then behaves unpredictably because it was written against assumptions about structure that this payload violates.

A json schema validator takes a schema definition (JSON Schema, the well-established standard used by OpenAPI, AJV, and dozens of other libraries) and a JSON document, and tells you exactly which fields pass and which fail. The error output points to the specific path in the document that failed, the rule that was violated, and the expected versus actual value. Instead of discovering a contract mismatch when your production error tracker fills up with null pointer exceptions, you discover it in thirty seconds during development.

Schema validation is also the fastest way to understand an unfamiliar API. Paste a response example and a schema, see which fields are actually required versus optional, and you immediately know which fields you can depend on in your code. A good json schema validator with detailed error reporting turns contract testing from a theoretical practice into something you actually do every time you integrate with a new endpoint.
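A hand-rolled sketch of the idea, to show the shape of the check. Real JSON Schema covers far more (nesting, formats, enums, conditionals), and in practice you would use a library like AJV or Python's jsonschema:

```python
def check_shape(doc, schema, path="$"):
    """Report which fields are missing or have the wrong type."""
    errors = []
    for key, expected_type in schema.items():
        if key not in doc:
            errors.append(f"{path}.{key}: required field missing")
        elif not isinstance(doc[key], expected_type):
            errors.append(
                f"{path}.{key}: expected {expected_type.__name__}, "
                f"got {type(doc[key]).__name__}"
            )
    return errors

# A numeric field arrived as a string, and a required field is absent
errors = check_shape({"id": "42", "name": "alice"},
                     {"id": int, "name": str, "email": str})
```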

Regex Tester

Regular expressions are unavoidable in backend work. Email validators, URL parsers, log extractors, input sanitizers, route matchers, and data cleaners all lean on regex because no other technique matches the expressiveness-to-code-size ratio for pattern matching. The problem is that writing regex without a live testing tool is a uniquely painful experience. You write what you think should work, run it, get nothing, tweak one character, run it again, get everything, tweak again, and iterate until either the regex works or you decide to solve the problem a different way.

A regex tester eliminates this iteration loop almost entirely. You paste sample input, type your pattern, and see every match highlighted in real time as you type. Capture groups are color-coded and shown explicitly so you can verify that the group you expect is capturing what you expect. When your pattern matches nothing, you can build it up character by character and see exactly which addition breaks the match. When it matches too much, you can see which parts of the input are being captured that should not be and add boundary constraints iteratively.

The speed difference is enormous. What used to take twenty or thirty iterations with print statements between each run takes three or four iterations in a regex tester, and you end up with a pattern you actually understand rather than one you reverse-engineered through trial and error. For any regex more complex than \d+, the tester pays for itself immediately.
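Once the pattern works in the tester, it translates directly to code; named capture groups keep the extraction readable (the log line here is invented):

```python
import re

log = "2024-05-01 12:03:44 ERROR user=alice status=503"

# Named groups document what each piece of the pattern captures
pattern = re.compile(
    r"(?P<level>ERROR|WARN)\s+user=(?P<user>\w+)\s+status=(?P<status>\d{3})"
)

m = pattern.search(log)
if m:
    print(m.group("level"), m.group("user"), m.group("status"))
```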

UUID Generator

Unique identifiers are everywhere in backend code. Database primary keys, request correlation IDs, session tokens, idempotency keys, test fixture identifiers, event IDs, message queue entries, and dozens of other places where you need a value that is guaranteed not to collide with any other value in the system. UUIDs (version 4, the random-generated kind) are the default choice because they require no coordination across services, have vanishingly small collision probability, and work as primary keys in any reasonable database.

A uuid generator produces UUIDs in whatever format your context requires: standard hyphenated, uppercase, without hyphens, wrapped in braces, or with prefixes. That last point matters more than it sounds. Most databases accept hyphenated UUIDs, but some of your code might need the hyphens stripped for a URL path, or braces added for a specific format your ORM expects, or the whole thing uppercased for compatibility with a legacy system. Getting those variants by hand means either writing a utility function every time or copy-pasting and manually editing, which leads to typos.

For testing specifically, a uuid generator is invaluable when you need to seed a database with known, predictable fixtures, or when you need to construct a curl request that references a specific resource by ID. Generate five or ten UUIDs up front, paste them into your test data, and use them throughout the test flow. No more uuidgen in your terminal every thirty seconds, no more Node.js REPL just to generate a single ID.
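The format variants are all mechanical transformations of the same value; in Python they look like this:

```python
import uuid

def uuid_variants(u=None):
    """Return the common formatting variants of one UUIDv4."""
    u = u or uuid.uuid4()
    s = str(u)
    return {
        "standard": s,             # hyphenated 8-4-4-4-12
        "compact": u.hex,          # hyphens stripped, e.g. for URL paths
        "upper": s.upper(),        # legacy-system compatibility
        "braced": "{" + s + "}",   # brace-wrapped style some ORMs expect
    }

variants = uuid_variants()
```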

ULID Generator

UUIDs are unique, but they are not ordered in any useful way. A UUID generated at 9:00 AM and a UUID generated at 10:00 AM look equally random, which means if you store events, logs, or any time-series data keyed by UUID, you cannot sort them by ID to get chronological order. ULIDs solve this problem by combining a timestamp prefix with random bits. They are 26 characters, URL-safe, lexicographically sortable (so sorting by string gives you chronological order), and still have essentially zero collision probability.

A ulid generator produces time-sortable IDs that you can use anywhere UUIDs would go, with the added benefit that ordering actually means something. This is particularly valuable for event sourcing architectures, log aggregation systems, append-only tables, and any database index where range queries by recency are common. You get the “insert a new event, fetch the last 50 events” pattern for essentially free because ordering by primary key gives you chronological ordering.

For development, the practical use of a ulid generator is often to generate known fixture IDs that sort in a predictable order for tests. If you need test data representing events that happened at specific times, generating ULIDs with controlled timestamps makes the test assertions much cleaner than trying to coordinate separate timestamp and UUID fields.
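The timestamp-prefix construction is simple enough to sketch. This is a toy illustration of the layout (48 bits of milliseconds plus 80 random bits, Crockford base32 encoded), not a spec-complete implementation:

```python
import os
import time

# Crockford base32: ASCII-sorted, so string order matches numeric order
ALPHABET = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"

def ulid(ts_ms=None):
    """26-char, lexicographically sortable ID: 48-bit timestamp + 80 random bits."""
    if ts_ms is None:
        ts_ms = int(time.time() * 1000)
    value = (ts_ms << 80) | int.from_bytes(os.urandom(10), "big")
    chars = []
    for _ in range(26):
        chars.append(ALPHABET[value & 0x1F])
        value >>= 5
    return "".join(reversed(chars))
```

Because the timestamp occupies the most significant bits, plain string comparison of two ULIDs gives chronological order.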

HMAC Generator

HMAC is the quiet workhorse of API authentication. Every time a webhook arrives from Stripe, GitHub, Shopify, or any other platform that sends signed events, HMAC is how you verify that the payload actually came from that platform and was not tampered with in transit. Every time you implement JWT with an HS256 signature, HMAC is the signing algorithm. Every time you build an API that needs to accept signed requests without the overhead of full asymmetric cryptography, HMAC is the default choice.

An hmac generator lets you generate and verify HMAC signatures across SHA-1, SHA-256, SHA-384, and SHA-512 algorithms, which covers essentially every HMAC variant you will encounter in practice. The most common development use case is reproducing a signature that a webhook provider claims to have sent, so you can compare it against what your code computes and figure out why they do not match. Nine times out of ten the answer is something boring: you are signing the raw bytes but they are signing the UTF-8 encoded string, or you are including trailing whitespace, or your secret has a trailing newline from how it was copied out of a secrets manager.

Being able to paste the payload, paste the secret, pick the algorithm, and see what the expected signature should be takes this debugging from a multi-hour ordeal to a five-minute investigation. A good hmac generator is one of those tools you do not realize you need until you are deep in webhook signature hell, at which point you would pay real money for it.
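The verify side of this is a few lines of standard library code; the constant-time comparison is the part people forget:

```python
import hashlib
import hmac

def sign(secret: bytes, payload: bytes) -> str:
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify(secret: bytes, payload: bytes, signature: str) -> bool:
    # compare_digest avoids leaking a timing side channel
    return hmac.compare_digest(sign(secret, payload), signature)

sig = sign(b"webhook-secret", b'{"event":"ping"}')
```

Note how a single stray newline in the secret or payload produces a completely different signature, which is exactly the class of mismatch described above.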

Bcrypt Hash Generator

Storing passwords correctly is a solved problem, and the answer is: use bcrypt, scrypt, or Argon2, never a raw hash algorithm. Bcrypt is the most widely supported option and the default choice for most web frameworks. It has a configurable cost factor that lets you tune how expensive the hashing is, which is important because you want hashing to be slow enough to make brute-force attacks impractical but fast enough that legitimate logins do not time out.

A bcrypt hash generator lets you produce bcrypt hashes from plaintext passwords with any cost factor you want. The primary use case in backend development is creating test fixtures: seeding a test database with users whose passwords are known so you can write login tests, populating a staging environment with accounts for QA, or reproducing a specific hash to verify that your login flow accepts it correctly.

The secondary use case is evaluating cost factors. Every time hardware gets faster, the cost factor that was appropriate five years ago becomes inadequate. A bcrypt hash generator with live cost adjustment lets you time different cost factors on your own hardware, which gives you a concrete sense of what your login endpoint’s latency contribution from hashing will be. A cost factor of 12 that took 250ms on 2020 hardware might take only 100ms on 2026 hardware, and knowing that difference lets you make an informed decision about whether to bump your cost factor at your next security review.
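Bcrypt itself is not in the Python standard library, but the timing exercise works the same with any tunable-cost hash. This sketch uses stdlib PBKDF2 as a stand-in to show the measurement; swap in `bcrypt.hashpw` with different cost factors for the real thing:

```python
import hashlib
import os
import time

def time_hash(iterations, password=b"correct horse battery staple"):
    """Time one PBKDF2 hash at a given iteration count."""
    salt = os.urandom(16)
    start = time.perf_counter()
    hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
    return time.perf_counter() - start

# Multiplying the work multiplies the time roughly proportionally, just as
# bumping bcrypt's cost factor by one doubles its work.
fast = time_hash(10_000)
slow = time_hash(100_000)
```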

Curl Command Generator

Curl is the universal API testing tool. Every backend developer uses it dozens of times a day to hit endpoints, verify responses, and debug integration issues. The problem is that curl’s flag syntax is genuinely user-hostile. Building a curl command with custom headers, authentication, a JSON body, and the right content type requires remembering whether it is -H or --header, whether -d sends form data or raw JSON, whether -X POST is needed if you pass -d, and a dozen other small details that you never quite memorize because you use each of them slightly differently every time.

A curl command generator gives you a form-style interface for building curl commands. You fill in the URL, pick the HTTP method, add headers via a key-value UI, paste the body, choose the authentication type, and the tool outputs a correctly formatted curl command you can paste straight into your terminal. Reproducing a failing API call from a bug report goes from a ten-minute exercise in remembering curl flags to thirty seconds of filling in a form.

The generated commands also work as documentation. Paste a curl command into a bug ticket and anyone else can reproduce your exact call without needing to interpret your prose description of “call the POST endpoint with the X-Auth header and a JSON body containing blah blah.” A curl command generator makes that reproducibility effortless.
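The same form-to-command assembly is easy to script when you need it programmatically. A minimal sketch (the URL and header values are invented):

```python
import shlex

def build_curl(url, method="GET", headers=None, data=None):
    """Assemble a copy-pasteable curl command from its parts."""
    parts = ["curl", "-X", method]
    for name, value in (headers or {}).items():
        parts += ["-H", f"{name}: {value}"]
    if data is not None:
        parts += ["--data", data]  # --data implies POST unless -X overrides
    parts.append(url)
    return " ".join(shlex.quote(p) for p in parts)

cmd = build_curl("https://api.example.com/users", "POST",
                 {"Content-Type": "application/json"},
                 '{"name": "alice"}')
```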

Cron Expression Generator

Cron syntax is one of those legacies from Unix that developers have to deal with forever, even on systems that have nothing to do with Unix anymore. Kubernetes CronJobs, GitHub Actions schedules, AWS EventBridge rules, serverless schedulers, and dozens of other modern systems all use cron syntax for scheduling. And cron syntax is cryptic. */15 * * * * means every 15 minutes, 0 */4 * * * means every four hours on the hour, and 0 2 * * 1-5 means 2 AM on weekdays. Getting any of these wrong means either your job never runs or it runs a thousand times when it should run once.

A cron expression generator lets you build cron expressions through a visual interface. Pick your minute, hour, day-of-month, month, and day-of-week values from dropdowns or presets, and see the resulting expression along with a human-readable description of when it will run and a list of the next few execution times. Instead of writing 0 2 * * 1-5 and hoping you got the day-of-week numbering right (is Sunday 0 or 7? Trick question, both work), you pick “weekdays at 2 AM” from presets and the tool gives you an expression you can paste with confidence.

For debugging existing cron expressions, the generator works in reverse too. Paste a mysterious expression from an existing system and see both a human description and the next few times it will fire. This is particularly useful when you inherit a cron job that you need to modify slightly, and you want to verify you understand what the existing schedule does before changing it. A cron expression generator turns a cryptic five-field string into something you can actually reason about.
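The field expansion such a tool performs is mechanical. A minimal sketch for a single field (real cron adds month and weekday names, and the Sunday 0/7 quirk):

```python
def expand_field(field, lo, hi):
    """Expand one cron field ("*/15", "1-5", "0,30", "7") into its set of values."""
    values = set()
    for part in field.split(","):
        step = 1
        if "/" in part:
            part, step_str = part.split("/")
            step = int(step_str)
        if part == "*":
            start, end = lo, hi
        elif "-" in part:
            start, end = (int(x) for x in part.split("-"))
        else:
            start = end = int(part)
        values.update(range(start, end + 1, step))
    return values
```

So `*/15` over the minute field expands to {0, 15, 30, 45}, and the `1-5` in `0 2 * * 1-5` expands to Monday through Friday.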

SQL Formatter

SQL queries grow. A simple SELECT becomes a join, the join becomes a subquery, the subquery becomes a CTE, the CTE becomes a window function on top of a union, and suddenly you have a 300-line query that looks like a single paragraph of dense text because your ORM generated it on one line and your code review tool does not wrap. Reading that query is painful. Reviewing it is worse. Modifying it is asking for bugs.

A sql formatter reformats any SQL into a clean, indented, keyword-capitalized structure that is actually readable. SELECT clauses get separated from their FROM clauses. JOINs are aligned. Nested queries are indented with visual hierarchy. WHERE conditions are broken onto separate lines. The result is SQL you can read without wanting to quit the profession.

The most common use case for a sql formatter is cleaning up queries pulled from ORM-generated logs or query plans. When a query is slow, you paste the generated SQL into the formatter, read through the clean version, identify the part that is problematic, and optimize. When reviewing someone else’s migration or complex report query, you paste their SQL into the formatter before reading it. The formatting does not change what the query does, but it changes how much of your mental budget is spent parsing syntax versus thinking about logic. That difference matters when you are looking at SQL you did not write.

JSON to SQL Converter

Seeding databases is an annoying but constant backend task. You have sample data from an API, a fixture file, or a production export, and you need to turn it into CREATE TABLE and INSERT statements that you can run against your development or test database. Writing those statements by hand is tedious, and writing a one-off script to do the conversion is overkill for a one-time job that you will need to do again next week with slightly different data.

A json to sql converter takes a JSON array of objects and produces CREATE TABLE and INSERT statements in whatever SQL dialect you need: PostgreSQL, MySQL, SQLite, or SQL Server. Type detection is automatic: numbers become numeric columns, booleans become boolean columns, strings become varchar or text, and nested objects get flattened or serialized based on your preference. The output is clean SQL you can run directly or adapt further.

The most valuable use case is converting production API responses into local test data. Pull a sample response from your production API (scrubbed of sensitive data), paste it into a json to sql converter, and get SQL that populates your local database with realistic test fixtures. The alternative is either writing seed scripts by hand or maintaining a separate test data pipeline, both of which are significantly more work than a thirty-second tool conversion.
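The conversion itself is mostly type mapping. A minimal sketch for flat rows; a real converter adds CREATE TABLE inference, dialect differences, and nested-object handling:

```python
def sql_literal(value):
    """Render a Python value as a SQL literal (bool must be checked before int)."""
    if value is None:
        return "NULL"
    if isinstance(value, bool):
        return "TRUE" if value else "FALSE"
    if isinstance(value, (int, float)):
        return str(value)
    return "'" + str(value).replace("'", "''") + "'"  # escape single quotes

def json_to_inserts(table, rows):
    statements = []
    for row in rows:
        cols = ", ".join(row)
        vals = ", ".join(sql_literal(v) for v in row.values())
        statements.append(f"INSERT INTO {table} ({cols}) VALUES ({vals});")
    return statements

stmts = json_to_inserts("users", [{"id": 1, "name": "alice", "active": True}])
```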

HTTP Status Code Reference

HTTP status codes are both well-defined and widely misused. Every backend developer learns 200, 201, 301, 400, 401, 403, 404, and 500, and then hits a scenario where the right answer is 409 or 422 or 451, at which point they either pick something close or look up the full list. The full list is surprisingly long and the semantic differences between similar codes matter more than they appear. A 401 versus 403 is a meaningful distinction that affects how clients retry. A 409 versus 422 says different things about what the client should do next.

An http status code reference gives you the complete list organized by category, with descriptions of each code and the situations where it is appropriate. Instead of googling “difference between 401 and 403” for the fiftieth time, you have a single reference that answers the question in context. The 4xx section covers all the client error codes, the 5xx section covers server errors, and edge cases like 418, 451, and 511 get proper explanations for when they actually apply.

For API design specifically, a good http status code reference prevents bad decisions. Developers who do not reference the spec tend to return 500 for everything that is not 200, or 400 for every client error regardless of what specifically went wrong. Picking the right status code based on the actual semantics makes your API meaningfully better for clients who want to handle errors intelligently.
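The decision logic the reference supports can be made explicit in code. A deliberately simplified illustration of the semantics (real APIs sometimes return 404 instead of 403 to avoid revealing that a resource exists):

```python
def choose_status(authenticated, authorized, exists, payload_valid):
    """Illustrative mapping from request conditions to the right status code."""
    if not authenticated:
        return 401  # who are you? retry with credentials
    if not exists:
        return 404  # no such resource
    if not authorized:
        return 403  # we know who you are; the answer is still no
    if not payload_valid:
        return 422  # well-formed request, semantically invalid body
    return 200
```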

HTTP Header Viewer

HTTP headers are where most API debugging actually happens. Authentication is in headers. Caching decisions are in headers. CORS negotiation is entirely in headers. Content types, rate limit information, session cookies, tracing IDs, security directives, and compression choices all live in headers. When an API call fails in a way that the body does not explain, the headers almost always have the answer.

An http header viewer takes pasted request or response headers and displays them with known-header descriptions, so you can quickly understand what each one means. Authentication headers like Authorization, WWW-Authenticate, and Proxy-Authenticate are explained. Caching headers like Cache-Control, ETag, and Vary are explained with their implications. CORS headers are parsed and annotated so you can see whether your preflight actually allows the method and origin you need.

The most common development use case for an http header viewer is debugging CORS. Every backend developer has spent an afternoon trying to figure out why their browser is blocking a request that works fine from curl, and almost always the answer is a specific header that is either missing or misconfigured. Pasting the actual response headers into a viewer and seeing the annotations makes the mismatch obvious in a way that staring at the raw text does not.
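The CORS check a viewer automates boils down to comparing a handful of response headers against what the browser needs. A simplified sketch (real preflight also involves Access-Control-Allow-Headers and credentials handling; the header values here are invented):

```python
def preflight_allows(response_headers, origin, method):
    """Would this preflight response let the browser proceed?"""
    allow_origin = response_headers.get("Access-Control-Allow-Origin", "")
    allow_methods = {
        m.strip()
        for m in response_headers.get("Access-Control-Allow-Methods", "").split(",")
    }
    return allow_origin in ("*", origin) and method in allow_methods

headers = {
    "Access-Control-Allow-Origin": "https://app.example.com",
    "Access-Control-Allow-Methods": "GET, POST",
}
```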

JSON to YAML Converter

Modern infrastructure runs on YAML. Kubernetes manifests, GitHub Actions workflows, Docker Compose files, OpenAPI specs, CI/CD pipelines, and countless configuration systems all use YAML as the canonical format. But a lot of the data that needs to go into those files comes from JSON sources: API responses, existing JSON configs that need to be migrated, exported data from tools that only emit JSON. Converting between the two by hand is error-prone because YAML is indentation-sensitive in ways that introduce subtle bugs.

A json to yaml converter handles the conversion correctly, including the parts that are easy to get wrong: proper indentation, correct handling of special characters that need quoting, preservation of key order when that matters, and correct representation of null, boolean, and numeric types. The output is YAML you can paste directly into your Kubernetes manifest or GitHub Actions workflow without having to manually fix indentation or worry about whether a string needs quotes.

The reverse conversion is equally useful. When you are debugging a Kubernetes pod that will not start and you need to compare the live manifest against what you committed, a json to yaml converter lets you flip between formats easily so you can cross-reference against the JSON output of kubectl get -o json. Flipping between JSON and YAML representations of the same data is one of the most common small tasks in modern backend and DevOps work, and having a reliable tool for it removes a surprising amount of friction.
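The core of the conversion is a recursive walk over the structure. This toy version handles dicts and lists of scalars only, and skips the string-quoting rules a real converter must get right; use PyYAML or a proper tool for anything serious:

```python
def to_yaml(value, indent=0):
    """Toy JSON-to-YAML: dicts and lists of scalars, two-space indentation."""
    pad = "  " * indent
    if isinstance(value, dict):
        lines = []
        for key, v in value.items():
            if isinstance(v, (dict, list)) and v:
                lines.append(f"{pad}{key}:")
                lines.append(to_yaml(v, indent + 1))
            else:
                lines.append(f"{pad}{key}: {scalar(v)}")
        return "\n".join(lines)
    if isinstance(value, list):
        return "\n".join(f"{pad}- {scalar(v)}" for v in value)
    return pad + scalar(value)

def scalar(v):
    if v is None:
        return "null"
    if isinstance(v, bool):
        return "true" if v else "false"
    return str(v)
```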

Random API Key Generator

Every backend service that exposes an API to external clients needs API keys. Every integration with a third-party service needs API keys. Every development environment needs test API keys that look realistic but are clearly not production values. Generating these manually means either using a password generator that outputs values that do not match the conventional format for API keys, or writing one-off code every time, neither of which is a good use of time.

A random api key generator produces API keys in realistic formats with configurable prefixes, character sets, and lengths. You can generate keys like sk_live_abc123... or pk_test_xyz789... that follow the Stripe-style convention, or api_... style keys, or any other prefix pattern your system uses. The keys are cryptographically random (using crypto.getRandomValues), so they are suitable for real use, not just placeholder values.

The prefix convention matters more than it looks. Prefixed keys are self-documenting: sk_live_ immediately tells you this is a secret key for production, pk_test_ immediately tells you this is a publishable key for testing. Using a random api key generator that supports custom prefixes lets you maintain this convention in your own APIs without having to hand-construct keys or maintain a separate generation utility. When a key leaks in a log or error message, the prefix instantly tells you the severity.
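With a cryptographically secure source, the generator itself is a few lines. A sketch using Python's stdlib secrets module; the prefix convention mirrors the Stripe-style examples above:

```python
import secrets
import string

def api_key(prefix="sk_test_", length=32):
    """Prefixed, cryptographically random API key."""
    alphabet = string.ascii_letters + string.digits
    return prefix + "".join(secrets.choice(alphabet) for _ in range(length))

key = api_key()
```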

Conclusion

Backend and API work is mostly composed of small, unglamorous tasks that repeat endlessly: validating payloads, generating IDs, hashing passwords, building curl commands, authoring cron schedules, formatting SQL, converting formats, looking up status codes, and debugging headers. None of these tasks are hard individually. None of them are interesting individually. Cumulatively, they are where your day goes if you do not have good tools.

The best backend engineers do not solve these problems faster by being smarter. They solve them faster by having a short list of tools that make each one a thirty-second task instead of a five-minute task. Pin these 15, use them every day, and redirect the saved time toward the actual engineering work that makes your service better: clean architecture, sensible abstractions, thorough testing, good error handling, and thoughtful API design. The boring tasks do not go away, but they stop taking over the day.
