It's possible to run sqlx in 'offline' mode that uses your schema to do the checks so you don't need a live database. That's a popular option in CI/CD scenarios.
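For reference, the offline workflow (as of recent sqlx-cli versions; exact output paths have changed across releases) looks roughly like:

```shell
# Generate query metadata while a dev database is reachable
cargo sqlx prepare

# Later (e.g. in CI), build against the cached metadata, no database needed
SQLX_OFFLINE=true cargo build
```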
It's absolutely core to SQLx. Based on the parent comment, I'm surprised to hear it isn't widely known. The first time I used SQLx must have been 4 or 5 years ago, and they had it back then.
Well, it hurts that it isn't the default. The README still tells you to set the environment variable, it just isn't the "default" way to do things. In my opinion it would be better to entirely remove support for connecting to the database during compilation. Does anyone actually want to use it that way?
Comparing and contrasting: sqlc's type checking happens via code generation, basically the only option in Go since there's nothing remotely like proc macros. Even with code generation, sqlc doesn't default to requiring an actual running instance of the database, though you can use a real database connection (presumably this is useful if you're doing something weird that sqlc's internal model doesn't support, but even using PostgreSQL-specific features I haven't really run into much of this).
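For anyone who hasn't seen sqlc: you annotate plain SQL files with comments and it generates typed Go functions from them. A minimal sketch (table and query names are illustrative; see sqlc's docs for the full annotation set):

```sql
-- name: GetAuthor :one
SELECT id, name, bio FROM authors
WHERE id = $1 LIMIT 1;

-- name: ListAuthors :many
SELECT id, name, bio FROM authors
ORDER BY name;
```

Running `sqlc generate` turns each annotated query into a Go method with typed parameters and result structs, no database connection required by default.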
I think that, if a new user is going to encounter an error, it should be that SQLx couldn't talk to the database rather than that a mysterious file doesn't exist. They're going to need to connect to a dev database either way. They can learn about caching the schema information when they come to those later steps like building a CI pipeline. Early in a project, when your queries and schema are unstable, caching isn't going to be very useful anyway, since you'll be invalidating it constantly.
The sqlc authors are to be applauded for making a static analyzer; that is no small feat. But if you can get away with offloading SQL semantics to the same SQL implementation you plan to use, I think that's a steal. The usability hit is basically free: don't you want to connect to a dev database locally anyway to run end-to-end tests? It's great to eliminate type errors, but unless I'm missing something, neither SQLx nor sqlc will protect you from value errors (e.g. constraint violations).
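To illustrate the value-error point with a sketch (using Python's stdlib sqlite3 as a stand-in for either library): a query can be perfectly well-typed and still fail at runtime on a constraint.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (email TEXT NOT NULL UNIQUE)")
con.execute("INSERT INTO users (email) VALUES ('a@example.com')")

# This second INSERT is well-typed (a TEXT value into a TEXT column),
# so compile-time query checking would happily accept it...
try:
    con.execute("INSERT INTO users (email) VALUES ('a@example.com')")
except sqlite3.IntegrityError as e:
    # ...but it still fails at runtime with a UNIQUE constraint violation.
    print(e)
```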
1. I can't tell you how unconvinced I am by the claim that the error is less confusing. A good error message tells you what's wrong and, ideally, how to remedy it... and to me there isn't really a practical difference between "set this environment variable" and "run this command". It seems like you basically add one extra step, but you prevent people from choosing a probably suboptimal workflow that they almost certainly don't want to use anyway. Either way, I don't think it's more confusing, and for someone new it's better to have only one way to do something, especially if it's the obviously superior one.
2. Sure, the database will probably be running locally, when you're working on database stuff. However, the trouble here is that while I almost definitely will have a local database running somehow, it is not necessarily going to be accessible from where the compiler would normally run. It might be in a VM or a Docker container where the database port isn't actually directly accessible. Plus, the state of the database schema in that environment is not guaranteed to match the code.
If I'm going to have something pull my database schema to do code generation, I'd greatly prefer it be set up in such a way that I can easily wrap it: hermetically set up a database and run migrations from scratch, so the schema always matches the code. It's not obvious what issues a mismatch could cause other than compilation errors, but personally I'd prefer that it simply weren't possible.
The error message is a fair point; I do still think that making caching the default is premature.
I would definitely recommend writing a Compose file that applies your migrations to a fresh RDBMS and allows you to connect from the host device, regardless of what libraries you're using. Applying your migrations will vary by what tools you use, but the port forwarding is 2 simple lines. (Note that SQLx has a migration facility, but it's quite bare bones.)
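As a sketch of what that Compose file might look like for Postgres (the image tag, credentials, and migrations path are placeholders; many migration tools can also run as a one-shot service instead):

```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: dev
      POSTGRES_PASSWORD: dev
      POSTGRES_DB: app
    ports:
      - "5432:5432"   # the two simple lines exposing the database to the host
    volumes:
      # the official postgres image runs *.sql files from this
      # directory on first startup, which works for simple migrations
      - ./migrations:/docker-entrypoint-initdb.d:ro
```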
This is not quite the same thing, because it requires `sqlx prepare` to be run first, and that talks to the database to get type information. In sqlc, on the other hand, query parsing and type inference are implemented from first principles, in pure Go.
sqlc's approach has its limitations. Its SQLite query parser is generated from an ANTLR grammar, and I've encountered situations where valid SQLite syntax was rejected by sqlc due to their parser failing.
Type inference was okay, since SQLite barely has any types. The bigger issue I had was dealing with migration files. The nice part about SQLx is that `cargo sqlx database setup` will run all necessary migrations, and no special tooling is needed to manage migration files. sqlc, on the other hand, hard-codes support for specific Go migration tools, and each of the supported tools was either too opinionated for my use case or seemed unmaintained. SQLx's built-in migration tooling requires zero extra dependencies and satisfies my needs. Additionally, inferring types inside the actual database has its benefits: (1) no situations where subsets of valid query syntax are rejected, and (2) the DB may be used for actual schema validation.
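For reference, the built-in migration workflow via sqlx-cli is roughly this (command names per the sqlx-cli docs; `DATABASE_URL` is assumed to point at your dev database):

```shell
cargo install sqlx-cli            # one-time setup
sqlx migrate add create_users     # writes a timestamped .sql file under migrations/
cargo sqlx database setup         # creates the database and runs all pending migrations
```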
For an example of why (2) may be better than sqlc's approach: SQLite sometimes allows NULL primary keys, and this gets reflected in SQLx when it validates inferred types against the actual database schema. When I last used sqlc, this potential footgun was never represented in the generated types. In SQLx, the footgun is documented in the type system whenever it detects that SQLite allows silly things (like NULL primary keys when the PK satisfies certain conditions).
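The SQLite quirk in question can be demonstrated directly (a sketch using Python's stdlib sqlite3): unless a primary key column is declared NOT NULL or is exactly `INTEGER PRIMARY KEY`, SQLite accepts NULL in it.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# TEXT PRIMARY KEY without NOT NULL: a long-standing SQLite quirk
# means NULL is accepted here, unlike in most other databases.
con.execute("CREATE TABLE users (id TEXT PRIMARY KEY, name TEXT)")
con.execute("INSERT INTO users (id, name) VALUES (NULL, 'alice')")
print(con.execute("SELECT id, name FROM users").fetchone())  # (None, 'alice')
```

A tool that types that `id` column as non-nullable is papering over behavior the database actually permits.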