This is the first post in a series about problems I encountered while working with rather big PostgreSQL databases. In it, I describe my assumptions about database size, along with some facts from the PostgreSQL documentation.
How big does a table or database have to be to be considered really “big”?
There’s no hard definition here, so what follows are only my assumptions.
For a single table, I’d say it can be considered big when it approaches 100 million rows. But of course, it also depends on the row size itself. Things will be completely different for a table with two or three simple integer columns compared to, let’s say, twenty text columns loaded with heavy data.
On the database level – in my opinion, 100 GB is a quite large database. But, again, it depends on how many tables there are and how heavy single rows are.
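If you want to check where your own tables and databases fall on this scale, PostgreSQL has built-in size functions. A minimal sketch (the table name `mytable` is just a placeholder):

```sql
-- Total on-disk size of a table, including its indexes and TOAST data
SELECT pg_size_pretty(pg_total_relation_size('mytable'));

-- Size of the database you are currently connected to
SELECT pg_size_pretty(pg_database_size(current_database()));

-- Approximate row count without a full scan (planner statistics,
-- so it may lag behind the true count until the next ANALYZE)
SELECT reltuples::bigint AS estimated_rows
FROM pg_class
WHERE relname = 'mytable';
```

The statistics-based row estimate is worth knowing about, because `SELECT count(*)` on a 100-million-row table is itself a heavyweight query.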
What are the PostgreSQL limits?
Here we’re in a much better situation, as the limits are documented in the official PostgreSQL FAQ. According to the FAQ:
- Maximum size for a database? Unlimited (32 TB databases exist)
- Maximum size for a table? 32 TB
- Maximum size for a row? 1.6 TB
- Maximum size for a field? 1 GB
- Maximum number of rows in a table? Unlimited
- Maximum number of columns in a table? 250–1600, depending on column types
- Maximum number of indexes on a table? Unlimited
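To see which of your tables are closest to these limits in practice, a query against the system catalogs can list the biggest relations first. A sketch:

```sql
-- Ten largest ordinary tables in the current database,
-- measured by total on-disk size (heap + indexes + TOAST)
SELECT relname,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind = 'r'   -- 'r' = ordinary table
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;
```

Even a quick look at this output usually shows that “big database” really means one or two big tables.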
That’s all for today
It wasn’t a very long post, was it? But no worries, the next articles in the series will be more substantial (at least I hope so 🙂 ). This one is meant mainly as a common reference point for subsequent posts.