Unix Timestamps Explained: How Computers Track Time
Right now, as you're reading this, a counter is ticking away inside every server, database, and operating system on Earth. It started at zero on January 1, 1970, and it's been counting seconds ever since. That number (currently somewhere around 1.77 billion) is the Unix timestamp, and it's how most of the world's computers actually keep track of time.
It's a surprisingly elegant solution to a genuinely hard problem: how do you represent time in a way that every computer on the planet can agree on, regardless of time zone, calendar system, or locale? The answer turned out to be "just count the seconds."
What Is a Unix Timestamp?
A Unix timestamp (also called Epoch time or POSIX time) is the number of seconds that have elapsed since midnight UTC on January 1, 1970. That specific moment, 00:00:00 UTC on January 1, 1970, is called the Unix epoch.
Some examples to give you a feel for the numbers:
- 0 = January 1, 1970, 00:00:00 UTC (the epoch itself)
- 946684800 = January 1, 2000, 00:00:00 UTC (the millennium)
- 1000000000 = September 9, 2001, 01:46:40 UTC (the "billennium")
- 1700000000 = November 14, 2023, 22:13:20 UTC
- 2000000000 = May 18, 2033, 03:33:20 UTC (mark your calendar)
Negative timestamps represent dates before the epoch. -86400 is December 31, 1969 (one day, or 86,400 seconds, before the epoch). Most systems support this, though you'll occasionally hit software that doesn't handle pre-1970 dates correctly.
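Python's standard library can turn these raw numbers into readable UTC dates. A minimal sketch using the example values above (negative timestamps work on most platforms, though some, notably Windows, reject pre-1970 values):

```python
from datetime import datetime, timezone

# Convert a few of the timestamps above into readable UTC dates.
for ts in (0, 946684800, 1000000000, -86400):
    dt = datetime.fromtimestamp(ts, tz=timezone.utc)
    print(ts, "->", dt.isoformat())
# 0 -> 1970-01-01T00:00:00+00:00
# 946684800 -> 2000-01-01T00:00:00+00:00
# 1000000000 -> 2001-09-09T01:46:40+00:00
# -86400 -> 1969-12-31T00:00:00+00:00
```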
You can convert any timestamp to a readable date (or the other way around) with our Timestamp Converter.
Why Count from 1970?
The choice of 1970 as the starting point is mostly a practical accident. When Ken Thompson and Dennis Ritchie were building Unix at Bell Labs in the early 1970s, they needed a reference point for their timekeeping system. The original Unix time implementation used a 32-bit integer counting 1/60th-of-a-second intervals since the start of 1971. That overflowed too quickly, so they switched to full seconds and moved the epoch to 1970 because it was a nice round year that was recent enough to be useful.
There's nothing inherently special about January 1, 1970. Other systems use different epochs. Windows uses January 1, 1601. Mac OS Classic used January 1, 1904. GPS time starts on January 6, 1980. But Unix's choice became dominant because Unix itself became dominant. Linux, macOS, Android, iOS, and most web servers all trace their lineage back to Unix, and they all inherited its epoch.
How Computers Use Timestamps
Storing time as a single number has massive practical advantages. Comparing two dates? Just compare two numbers. A bigger number means a later date. Want to know how many days between two events? Subtract the timestamps and divide by 86,400. Need to sort records by date? Sort by a single numeric column. No parsing needed, no string comparisons, no format confusion.
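The arithmetic really is that simple. A short Python sketch using the example timestamps from earlier:

```python
# Timestamps are plain numbers, so date logic is plain arithmetic.
millennium = 946684800   # 2000-01-01 00:00:00 UTC
billennium = 1000000000  # 2001-09-09 01:46:40 UTC

# Later date? Bigger number.
assert billennium > millennium

# Days between two events: subtract, then divide by 86,400.
days_between = (billennium - millennium) // 86400
print(days_between)  # 617

# Sorting records by date is just numeric sorting.
events = [1700000000, 0, 946684800]
print(sorted(events))  # [0, 946684800, 1700000000]
```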
Databases love timestamps for this reason. A 32-bit integer takes up 4 bytes of storage. The equivalent human-readable date string "2026-03-21T14:30:00Z" takes 20 bytes. When you have a table with 500 million rows, that difference matters. And numeric comparisons are significantly faster than string comparisons for the database engine.
Programming languages convert between timestamps and human-readable dates as needed. In JavaScript, Date.now() gives you the current timestamp in milliseconds. Python's time.time() returns it in seconds as a float. PHP's time() returns seconds as an integer. Under the hood, they're all counting from the same epoch.
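A quick Python illustration of that round trip (the same idea applies in JavaScript or PHP, just with different function names):

```python
import time
from datetime import datetime, timezone

now = time.time()  # seconds since the epoch, as a float

# Timestamp -> human-readable UTC datetime
dt = datetime.fromtimestamp(int(now), tz=timezone.utc)
print(dt.isoformat())

# Human-readable datetime -> timestamp (round-trips exactly)
assert int(dt.timestamp()) == int(now)
```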
Every log file, database record, and bank transaction is stamped with some variation of this counter. It's the shared heartbeat of digital infrastructure.
The Y2K38 Problem
Remember Y2K? The panic about computers storing years as two digits, so the year 2000 would look like 1900? We're heading toward a similar (and arguably more serious) problem on January 19, 2038, at 03:14:07 UTC.
Here's why. The original Unix timestamp was stored as a signed 32-bit integer. A signed 32-bit integer can hold values from -2,147,483,648 to 2,147,483,647. That maximum value, 2,147,483,647 seconds after January 1, 1970, lands on January 19, 2038, at 03:14:07 UTC. One second later, the counter overflows. On systems that haven't been updated, the timestamp wraps around to the maximum negative value, which the system interprets as December 13, 1901.
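You can simulate the overflow yourself. The `wrap32` helper below is purely illustrative (not a real API); it mimics what happens to a signed 32-bit counter when it ticks past its maximum:

```python
from datetime import datetime, timezone

INT32_MAX = 2**31 - 1  # 2,147,483,647

# The last second a signed 32-bit timestamp can represent:
print(datetime.fromtimestamp(INT32_MAX, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00

def wrap32(ts):
    """Simulate signed 32-bit integer overflow."""
    return (ts + 2**31) % 2**32 - 2**31

# One second later, the counter wraps to the most negative value.
wrapped = wrap32(INT32_MAX + 1)
print(wrapped)  # -2147483648
```

On platforms that allow negative timestamps, converting `wrapped` back to a date gives December 13, 1901, which is exactly the failure mode described above.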
This isn't hypothetical speculation. It's a hard mathematical limit. Any system still using 32-bit timestamps in 2038 will either show wildly wrong dates, crash, or behave unpredictably. Financial calculations will break. Certificate expiration checks will fail. Scheduled tasks will misfire.
The fix is to move to 64-bit timestamps. A signed 64-bit integer can count up to 9,223,372,036,854,775,807 seconds from the epoch — that's about 292 billion years in the future, which should be enough. Most modern operating systems have already made this transition. Linux moved to 64-bit time internally years ago. macOS and Windows use 64-bit time. Newer versions of glibc (the C standard library on Linux) support 64-bit time on 32-bit hardware.
The danger is in embedded systems and legacy software. Old 32-bit devices that are still running — industrial controllers, older IoT hardware, legacy database systems — might not get updated. Some of these systems are deeply buried in critical infrastructure. It's the same pattern as Y2K: the core systems get fixed, but the forgotten machines in the corner cause unexpected problems.
Timestamps and Time Zones
One of the cleverest things about Unix timestamps is that they completely sidestep time zones. The timestamp 1700000000 means the exact same instant everywhere on Earth. Whether you're in Tokyo, London, or New York, that number refers to one specific moment in time.
Time zones are a display concern, not a storage concern. The timestamp gets stored as a universal number. When you need to show it to a human, you convert it to their local time zone at that point. This prevents an entire category of bugs that plague date/time handling.
Daylight saving time? Doesn't affect the timestamp. It's always counting UTC seconds (strictly speaking, Unix time ignores leap seconds, which is why every day is exactly 86,400 seconds). Political time zone changes (countries move their clocks all the time; Samoa once skipped an entire day in 2011)? The timestamp doesn't care. Historical calendar reforms? Irrelevant. The counter just counts.
This is why experienced developers store all times as UTC timestamps internally and only convert to local time at the presentation layer. It's the one rule that prevents the most time-related bugs, and it works because timestamps are inherently timezone-free.
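In Python, that presentation-layer conversion might look like this, using the standard `zoneinfo` module (stdlib since Python 3.9):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

ts = 1700000000  # one instant, stored once as a universal number

# Convert to local time only when displaying to a human.
for tz in ("UTC", "Asia/Tokyo", "America/New_York"):
    local = datetime.fromtimestamp(ts, tz=ZoneInfo(tz))
    print(tz, local.isoformat())
# UTC              2023-11-14T22:13:20+00:00
# Asia/Tokyo       2023-11-15T07:13:20+09:00
# America/New_York 2023-11-14T17:13:20-05:00
```

Three different wall-clock readings, one stored value: the conversion happens at display time, never at storage time.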
Seconds vs. Milliseconds
Here's a common source of confusion: some systems use seconds since epoch, and others use milliseconds. JavaScript is the biggest offender. Date.now() returns milliseconds, so the current time looks like 1774300000000 instead of 1774300000. Java's System.currentTimeMillis() also uses milliseconds. Python and PHP use seconds.
How do you tell which you're looking at? Count the digits. A timestamp in seconds has 10 digits right now (and will until November 2286). A timestamp in milliseconds has 13 digits. If you see a 13-digit number, divide by 1,000 before treating it as a Unix timestamp in seconds.
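That digit-counting heuristic is easy to code up. The `to_seconds` helper below is a hypothetical convenience function, not a standard API:

```python
def to_seconds(ts):
    """Normalize a numeric timestamp to seconds, guessing the
    unit from its number of digits (hypothetical helper)."""
    digits = len(str(abs(int(ts))))
    if digits >= 19:   # nanoseconds
        return ts / 1_000_000_000
    if digits >= 16:   # microseconds
        return ts / 1_000_000
    if digits >= 13:   # milliseconds
        return ts / 1_000
    return ts          # already seconds

print(to_seconds(1774300000000))  # 1774300000.0 (was milliseconds)
print(to_seconds(1774300000))     # 1774300000 (already seconds)
```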
Some APIs and databases use microseconds (16 digits) or nanoseconds (19 digits). PostgreSQL's timestamp type has microsecond precision internally. High-frequency trading systems and scientific instruments care about nanoseconds. For most web applications, second or millisecond precision is more than enough.
Reading Timestamps in the Wild
You'll encounter Unix timestamps in all sorts of places once you start looking:
API responses. Many APIs return dates as timestamps rather than formatted strings. Twitter, Slack, and Discord all use timestamps in their API responses. The field might be called "created_at," "timestamp," "ts," or just "time."
Database records. If you're looking at raw database rows, date columns stored as integers are almost certainly Unix timestamps. MySQL's UNIX_TIMESTAMP() function converts dates to this format, and FROM_UNIXTIME() converts back.
JSON data. API responses and config files often have timestamp fields. A JSON formatter can help you read the structure, but you'll need to convert the timestamp separately to know what date it represents.
File systems. Run ls -l in a Linux or Mac terminal and you'll see human-readable dates. But internally, the file system stores creation, modification, and access times as timestamps. The stat command shows you the raw numbers.
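You can read those raw file timestamps from Python too. A small sketch using `os.stat` (the filename is arbitrary, chosen just for the demo):

```python
import os
import time

# Create a file, then inspect its raw modification timestamp.
path = "example.txt"
with open(path, "w") as f:
    f.write("hello")

st = os.stat(path)
print(st.st_mtime)  # a raw Unix timestamp, e.g. 1774300000.123

# It should be "now", give or take a moment.
assert abs(st.st_mtime - time.time()) < 5
os.remove(path)
```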
JWT tokens. The "iat" (issued at), "exp" (expiration), and "nbf" (not before) fields in Base64-decoded JWT tokens are all Unix timestamps. If you're debugging authentication issues, decoding these fields often reveals the problem — like an expired token or a clock skew between servers.
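Decoding a JWT payload takes only the standard library. The token below is a toy, unsigned example built purely for illustration (never skip signature verification in real authentication code):

```python
import base64
import json
import time

def decode_payload(token):
    """Decode the middle (payload) segment of a JWT: base64url-encoded JSON."""
    payload_b64 = token.split(".")[1]
    # Restore the base64 padding that JWTs strip off.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a toy token (header.payload.signature) for the demo.
claims = {"iat": 1700000000, "exp": 1700003600}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
token = f"header.{payload}.signature"

decoded = decode_payload(token)
print(decoded["exp"])                # 1700003600
print(decoded["exp"] < time.time())  # True: this token expired back in 2023
```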
The Unix timestamp is one of computing's quiet successes. It's been running continuously since 1970, it works identically across every platform, and it reduces the incredible complexity of human timekeeping to a single incrementing number. Convert some timestamps yourself with our Timestamp Converter and see how the numbers map to the dates you know.
Ready to run your own numbers?
Try our free calculator and get instant results.
Try our Timestamp Converter →

InstaCalcs Team
Free calculators and tools for everyday math.