
Speaking of Go, if you want compile-time type checking like what SQLx offers, the Go ecosystem has an option that is arguably even better at it:

https://sqlc.dev/

It has the advantage of implementing the parsing and type-checking logic in pure Go, which lets it import your migrations and infer the schema for type checking. With SQLx, you need your database engine running at compile time, during proc macro execution, with the schema already applied. That makes SQLx kind of a non-starter for me, though I understand why nobody else wants to do what sqlc does: it involves a lot of duplication that essentially reimplements database features. (Somewhat ironically, this matters less for sqlc, since it runs as code generation outside the normal compilation; even if it did need a live database connection, the impact would be smaller. But it's still nice for simplicity.)
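To make the comparison concrete, this is roughly what a checked SQLx query looks like (a minimal Rust sketch; the `users` table, `pool`, and `user_id` are made-up names):

    // sqlx::query! parses the SQL and infers the row's Rust types at compile
    // time, which is why it needs schema information (a live database or a
    // prepared cache) while the proc macro runs.
    let user = sqlx::query!("SELECT id, name FROM users WHERE id = $1", user_id)
        .fetch_one(&pool)
        .await?;

sqlc gets you an equivalent typed result, but by generating Go code ahead of time from your SQL files instead.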



It's possible to run SQLx in "offline" mode, which uses cached query metadata to do the checks so you don't need a live database. That's a popular option in CI/CD scenarios.
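For anyone who hasn't used it, the flow is roughly this (assuming a recent SQLx, where the cache lives in a `.sqlx/` directory):

    cargo sqlx prepare              # hits the dev database once, caches query metadata
    SQLX_OFFLINE=true cargo build   # later builds check against the cache, no DB needed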

It's absolutely core to SQLx. Based on the parent comment, I'm surprised to hear that it isn't widely known. The first time I used SQLx must have been 4 or 5 years ago, and they had it back then.

Well, it hurts that it isn't the default. The README still tells you to set the environment variable; it just isn't the "default" way to do things. In my opinion it would be better to remove support for connecting to the database during compilation entirely. Does anyone actually want to use it that way?

Comparing and contrasting: sqlc's type checking happens via code generation, basically the only option in Go since there's nothing remotely like proc macros. Even with code generation, sqlc doesn't default to requiring an actual running instance of the database, though you can use a real database connection (presumably useful if you're doing something weird that sqlc's internal model doesn't support; even using PostgreSQL-specific features, I haven't really run into much of this).
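For context, sqlc works from annotated SQL files along these lines (a sketch; the table and names are made up):

    -- name: GetUser :one
    SELECT id, name FROM users WHERE id = $1;

From that, `sqlc generate` emits a typed Go method, something like `GetUser(ctx context.Context, id int64) (User, error)`, without ever contacting a database.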


"default"? they are explicitly different macros with different syntax. use whichever you prefer.

I think that, if a new user is going to encounter an error, it should be that SQLx couldn't talk to the database rather than that a mysterious file doesn't exist. They're going to need to connect to a dev database either way. They can learn about caching the schema information when they get to later steps, like building a CI pipeline. Early in a project, when your queries and schema are unstable, caching isn't going to be very useful anyway, since you'll be invalidating it constantly.

The sqlc authors are to be applauded for writing a static analyzer; that is no small feat. But if you can get away with offloading SQL semantics to the same SQL implementation you plan to use, I think that's a steal. The usability cost is basically nil: don't you want to connect to a dev database locally anyway, to run end-to-end tests? It's great to eliminate type errors, but unless I'm missing something, neither SQLx nor sqlc will protect you from value errors (e.g. constraint violations).


1. I can't tell you how unconvinced I am that the error is less confusing. A good error message tells you what's wrong and, ideally, how to remedy it... and to me there isn't really a practical difference between "set this environment variable" and "run this command". You add one extra step, but you prevent people from choosing a suboptimal workflow that they almost certainly don't want anyway. Either way, I don't think it's more confusing, and for someone new it's better to have only one way to do something, especially when it's the obviously superior one.

2. Sure, the database will probably be running locally when you're working on database stuff. The trouble is that while I will almost certainly have a local database running somehow, it isn't necessarily accessible from where the compiler runs. It might be in a VM or a Docker container where the database port isn't directly exposed. Plus, the state of the database schema in that environment is not guaranteed to match the code.

If I'm going to have something pull my database schema to do code generation, I'd greatly prefer a setup I can easily wrap, so that I can hermetically spin up a database and run the migrations from scratch; that way the schema always matches the code. It's not obvious what issues a mismatch could cause beyond compilation errors, but personally I would prefer if it just wasn't possible.
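The kind of wrapper I have in mind, sketched with Docker and sqlx-cli (the container name and port are placeholders):

    docker run -d --name prep-db -e POSTGRES_PASSWORD=x -p 15432:5432 postgres:16
    export DATABASE_URL=postgres://postgres:x@localhost:15432/postgres
    # (wait for the container to accept connections, e.g. with pg_isready)
    cargo sqlx migrate run   # schema comes only from the checked-in migrations
    cargo sqlx prepare       # cache the query metadata for offline builds
    docker rm -f prep-db

Since the database is created and migrated from scratch every time, the cached schema can't desync from the code.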


The error message is a fair point; I do still think that making caching the default is premature.

I would definitely recommend writing a Compose file that applies your migrations to a fresh RDBMS and lets you connect from the host machine, regardless of what libraries you're using. Applying the migrations will vary by tool, but the port forwarding is two simple lines. (Note that SQLx has a migration facility, but it's quite bare-bones.)
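Something like this, for Postgres (a sketch; the image tag and paths are placeholders, and the init-directory trick only applies migrations on the first start of a fresh volume):

    services:
      db:
        image: postgres:16
        environment:
          POSTGRES_PASSWORD: dev
        ports:
          - "5432:5432"    # the two lines that expose the DB to the host
        volumes:
          - ./migrations:/docker-entrypoint-initdb.d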


This is not quite the same thing, because it requires `sqlx prepare` to be run first, and that talks to the database to get type information. In SQLC, on the other hand, query parsing and type inference are implemented from first principles, in pure Go.

sqlc's approach has its limitations. Its SQLite query parser is generated from an ANTLR grammar, and I've encountered situations where valid SQLite syntax was rejected by sqlc due to their parser failing.

Type inference was okay, since SQLite barely has any types. The bigger issue I had was dealing with migration files. The nice part about SQLx is that `cargo sqlx database setup` will run all necessary migrations, with no special tooling needed to manage the migration files; it requires zero extra dependencies and satisfies my needs. sqlc, on the other hand, hard-codes support for specific Go migration tools, and each of the supported tools was either too opinionated for my use case or seemed unmaintained.
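The built-in flow is small but covers the basics (the migration name here is made up):

    cargo sqlx migrate add create_users   # writes migrations/<timestamp>_create_users.sql
    cargo sqlx database setup             # creates the database and runs all migrations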

Additionally, inferring types against the actual database has its benefits: (1) no situations where subsets of valid query syntax get rejected, and (2) the DB can be used for actual schema validation. For an example of why (2) may be better than sqlc's approach: databases like SQLite sometimes allow NULL primary keys, and this gets reflected in SQLx when it validates inferred types against the actual database schema. When I last used sqlc, this potential footgun was never represented in the generated types. In SQLx, the footgun is documented in the type system whenever it can detect that SQLite allows silly things (like NULL primary keys when the PK satisfies certain conditions).
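Concretely, for an ordinary rowid table (this is documented SQLite behavior, kept for backwards compatibility):

    -- PRIMARY KEY does not imply NOT NULL in SQLite unless the column is an
    -- INTEGER PRIMARY KEY (a rowid alias) or explicitly declared NOT NULL.
    CREATE TABLE users (id TEXT PRIMARY KEY);
    INSERT INTO users (id) VALUES (NULL);  -- accepted!

SQLx surfaces this by treating the column as nullable (an Option on the Rust side) unless the schema rules the NULL out.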


I believe sqlc can also connect to the database for type inference now, fwiw.

Offline query caching is great. The team has made it work fantastically for workspace-oriented monorepos too.

I ran sqlx/mysql on a 6M-MAU Actix-Web website with 100k qps at peak, with relatively complex transactions and queries. It was rock solid.

I'm currently using sqlx on the backend and on the desktop (Tauri with sqlite).

In my humble opinion, sqlx is the best, safest, most performant, and most Rustful way of writing SQL. The ORMs just aren't quite there.

I wish other Rust client libraries were as nice as sqlx. I consider sqlx to be one of Rust's essential crates.


You seem to know your stuff. What's your opinion of diesel?

Implementing the parsing and type checking logic in pure Go is not an unqualified advantage. As you point out, it means that SQLC "...essentially reimplements database features..." and in my experience, it does not reimplement all of them.

Maintainer of sqlc here. Thanks for the kind words! I'm considering switching to the sqlx model of talking to a running database simply because trying to re-implement PostgreSQL internals has been a huge challenge. It works for most queries, but for the long tail of features, it's a losing battle.

Can you tell me why it's a non-starter for you?


I think it's only a non-starter for me in SQLx when not using query caching; caching makes the situation workable.

For sqlc, it isn't really a big problem, because you only need to run the code generation when you're actually modifying database things. That said, I think just passing a database URI and having analysis work off of it is not ideal. Using an actual database isn't a huge problem; having to manage the database instance out of band is the part that isn't great, because it allows the schema in the code to trivially desync from the schema used in analysis. If I used SQLx, I'd probably be compelled to wire up a solution that spawns the database and runs the migrations hermetically for the caching step. Likewise if I used this mode of sqlc.

I guess it might be possible for sqlc to add first-class support for that sort of concept, but I can see holes in it. For one thing, you have to figure out where to grab database binaries from, and at what version. An approach using Docker/Podman works and at least partly solves this problem, since you could allow specifying any OCI image, but it has caveats too, like requiring Docker or Podman to be installed. The most heroic effort would be some kind of solution using WASM builds of database engines: pulling down and running something like PGlite in-process seems almost ideal, but it limits you to whatever features, extensions, and versions can actually be made to work in WASM, at least unless/until database servers and extension vendors miraculously decide that supporting WASM as a target is a good idea. Still, if you want some crazy ideas for making the UX better, I think either the Docker approach or the WASM approach could be made to work to some degree.

Barring that, though, I'd be most likely to have some kind of Docker setup for running sqlc with an ephemeral database instance. It's not pretty, but it works...

I don't think it would be a non-starter, though. I only really think that connecting to the database from within rustc invocations is a non-starter.


I never gelled with how SQLC needs to learn your schema via the schema file. I'm used to Flyway, where you can update the schema as long as it's versioned correctly, such that running the full set of Flyway migrations produces the same DB schema.

I preferred go-jet, since it introspects the database for its code generation instead.


The way I prefer to use sqlc is in combination with a schema migration framework like goose. sqlc is able to read the migration files and infer the schema directly, without needing an actual database. This seems to work well in production.
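A minimal sqlc config for that kind of setup might look like this (a sketch; the paths and package names are placeholders):

    version: "2"
    sql:
      - engine: "postgresql"
        schema: "migrations"    # sqlc parses the goose migration files in here
        queries: "queries"
        gen:
          go:
            package: "db"
            out: "internal/db"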

That's how I'm using it as well (though I'm using some simple migration code instead of a framework): https://github.com/bbkane/enventory/tree/master/app/sqliteco...

I've been quite happy with this setup!


> With SQLx, you need your database engine running at compile time, during proc macro execution, with the schema already applied.

FWIW, the compile-time query checking is entirely optional. If you don't use the query syntax checking, then you don't need a live database and you don't need `sqlx prepare`.


I spent 2 weeks trying to build a very basic REST CRUD API with SQLc, and it was not better. I had to shift to SQLx because of how unintuitive SQLc was.

We've been running SQLC in production for a while now, and I'm curious which part of it you found unintuitive. We run ours as a container service within the development environment that compiles your code from a Postgres dump file. We've had no issues with it at all after the initial configuration, though the documentation certainly isn't great. Hell, I'm not sure I've ever worked with a better SQL-to-language tool in my decades of doing this, so I'm surprised that it isn't working out for you.

That being said, as I understand it, SQLx does something very different. If you want dynamic queries, you'll basically have to build that module yourself. The power of SQLC is that anyone who can write SQL can work on the CRUD part of your Go backend, even if they don't know Go. Hell, we've even had some success with business domain experts adding CRUD functionality by using LLMs to generate SQL. (We have a lot of safeguards around that, to make it less crazy than it sounds.)

If you want fancy LINQ, GraphQL, OData, or even a lot of REST frameworks, you're not getting any of that with SQLC, but that's typically not what you'd want from a Go backend in my experience. Might as well build it with C# or Java then.


It's quite simple, really. I want to write a query and have a concrete object as its return type. The framework that gets me there in the fewest steps is going to be more intuitive.

Let's compare. SQLC: configuration file (yaml/json), schema files, query files, and understanding the meta language in query-file comments to generate the code you want.

SQLx: the DATABASE_URL environment variable.

Now, does that mean SQLx is the best possible database framework? No, it does not. But because I didn't spend my time on things unrelated to the exact queries I had to write, I got more work done.

I want to appreciate the hard work the SQLx devs have put in to push the bar for a decent SQL developer experience. People give them a really hard time over certain design decisions, pending features, and bugs. I've seen multiple comments calling its compile-time query validation "gimmicky", and that's not nice at all. You can go to any other language and you won't find a framework that is as easy to get started with.


> SQLC: configuration file (yaml/json), schema files, query files, and understanding the meta language in query-file comments to generate the code you want

I would recommend using pg_dump for your schema file, which keeps it decoupled from SQLC as such and makes your DB easier to maintain; we use Goose, for example. In our setup, part of the pipeline is that you write your Goose migration, and then an automated process updates the DB running in your local dev DB container, does a pg_dump from it, and has our dev-container instance of SQLC compile your schema for you.
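A sketch of that pipeline (the database URL, directories, and file names are made up):

    goose -dir migrations postgres "$DEV_DB_URL" up       # apply the new migration
    pg_dump --schema-only --no-owner "$DEV_DB_URL" > schema.sql
    sqlc generate                                         # reads schema.sql per sqlc.yaml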

The configuration file is centralized as well, so you don't have to worry about it.

I agree with you on the SQLC meta language for queries; I appreciate that it's there, but we tend to avoid using it. I personally still consider the meta language a better way of doing things than in-code SQL queries. This is a philosophical sort of thing, of course, and I respect that not everyone agrees with me on it. It's hard for me to comment on SQLx, however, as I haven't really used it.

What I like about SQLC is that it can be completely decoupled from your Go code.


Maybe I'm drinking the sqlc Kool-Aid, but because I'm already using migration files, setting up the config to point to them and to a folder of SQL queries was pretty painless.

And of course, now that I have it, the incremental cost of adding a new query is really low as well.


That's all understandable. But like I said, I did spend 2 weeks working with SQLc, and when I compared it to just writing the query in my code, the developer experience was miles apart.

You could compare it to people writing CSS, JavaScript, and markup in separate files vs. having just one file in React/Svelte etc., which gives the user the option to combine everything into one.

There may be a lot of drawbacks to the latter approach, but it makes everything a hell of a lot easier for people who just want to get started building.


We're into Go, and SQLC by extension, because we write systems with zero dependencies outside of the standard library. That's a security and compliance thing.

As far as building something fast, I'm with you. I always reach for Python with uv, Litestar, and Advanced Alchemy when I want to build personal web projects. I don't think SQLC is bad as such; once you've written your SQL, you can essentially compile it into a CRUD application that's ready to go. As you've pointed out, however, you'd need to slap something like a GraphQL engine on top of it if you wanted rich queries easily, and you'd still not have the auto-generated OpenAPI that comes with Python web frameworks.

SQLC is for code where you want few (or zero) external dependencies, which is a very "Go" thing to want. It does scale well, but that requires you to build various CLI tools to help maintain things, as well as your own Go modules to add quality-of-life features like dynamic routers and GET queries for low-traffic requests.

I'll try SQLx eventually when I get time to look more into Rust.


Interesting - I've had the opposite experience. I usually prefer rust for personal projects, but when I recently tried to use SQLx with sqlite, lots of very basic patterns presented problems, and I wished I had sqlc back.

I love its compile-time query validation.

This is why I like using Node.js or Python with SQL: it's very simple to have them not care about the return types. SQL is already statically typed, per se; I don't need to re-assert everything. Achieving the same kind of automation in Go etc. requires parsing the schema at compile time like you described, which is complicated.


