Binary, Octal, and Hexadecimal: Number Systems Every Developer Needs

Every experienced developer has internalized a set of mental shortcuts for switching between number bases. You glance at 0xFF and read 255. You see 0755 in a file permission and know immediately what it means. You look at a hex dump and understand which bytes form which values. None of this is magic — it's a small set of patterns that become automatic with use.

Why Computers Use Binary

A transistor has two stable states: conducting current (on) or not conducting (off). It's the most reliable digital switch we can build, and we can fit billions of them on a chip. With only two states, the natural number system for hardware is base-2 (binary), where each digit is either 0 or 1.

Every number, character, instruction, and pixel your computer processes is ultimately represented as binary. An 8-bit byte can hold 256 distinct values (2⁸). A 32-bit integer can hold ~4.3 billion values. A 64-bit integer can hold ~18.4 quintillion values.

You rarely work with binary directly because it's verbose — 255 requires 8 binary digits: 11111111. But understanding the underlying structure unlocks everything else.

Converting Between Bases

The most useful mental operation is converting between binary, decimal, and hexadecimal.

Binary to decimal: Each bit position has a value that's a power of 2, starting from the right at 2⁰.

1 0 1 1 0 1 0 0
↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓
128 + 0 + 32 + 16 + 0 + 4 + 0 + 0 = 180
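The positional sum above is mechanical enough to sketch in a few lines of Python (using the same 8-bit example):

```python
# Each bit contributes (bit value) * 2^(position from the right).
bits = "10110100"
value = sum(int(b) << (len(bits) - 1 - i) for i, b in enumerate(bits))
print(value)         # 180
print(int(bits, 2))  # 180 — Python's int() with an explicit base does the same
```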

Decimal to binary: Repeatedly divide by 2 and collect the remainders from bottom to top.

180 ÷ 2 = 90 remainder 0
90  ÷ 2 = 45 remainder 0
45  ÷ 2 = 22 remainder 1
22  ÷ 2 = 11 remainder 0
11  ÷ 2 =  5 remainder 1
5   ÷ 2 =  2 remainder 1
2   ÷ 2 =  1 remainder 0
1   ÷ 2 =  0 remainder 1
Read upward: 10110100 = 180
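The division loop translates directly into code. A minimal Python sketch (the function name is mine; bin() is the built-in equivalent):

```python
def to_binary(n: int) -> str:
    """Convert a non-negative integer to binary via repeated division by 2."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        n, remainder = divmod(n, 2)
        digits.append(str(remainder))
    # Remainders come out least-significant first, so reverse ("read upward").
    return "".join(reversed(digits))

print(to_binary(180))  # '10110100'
```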

For quick conversions among decimal, hex, and binary without reaching for paper, the Number Base Converter handles any base instantly.

Why Hex Is Everywhere

Hexadecimal (base-16) uses digits 0–9 and letters A–F. One hex digit represents exactly 4 bits. Two hex digits represent exactly 8 bits — one byte.

That alignment is why hex dominates in systems programming, networking, and debugging. Binary is unwieldy to read; decimal doesn't map cleanly to bit boundaries. Hex is the sweet spot.

Binary:   1111 1111
Hex:      F    F     = 0xFF = 255 decimal

Binary:   1010 0011
Hex:      A    3     = 0xA3 = 163 decimal

A few values worth memorizing because they appear constantly:

Hex    Decimal  Binary
0x0F   15       0000 1111
0xFF   255      1111 1111
0x80   128      1000 0000
0x7F   127      0111 1111
0x100  256      1 0000 0000
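Because one hex digit is exactly one 4-bit group, expanding hex to binary works digit by digit, with no carrying. A short Python sketch (the helper name is illustrative):

```python
def hex_to_binary_groups(h: str) -> str:
    """Expand each hex digit into its 4-bit group, e.g. 'A3' -> '1010 0011'."""
    return " ".join(f"{int(d, 16):04b}" for d in h)

print(hex_to_binary_groups("FF"))  # '1111 1111'
print(hex_to_binary_groups("A3"))  # '1010 0011'
```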

Hex prefixes vary by language: 0x in C, Python, JavaScript; # in CSS; U+ for Unicode code points; \x in string literals.

Octal and File Permissions

Octal (base-8) uses digits 0–7. One octal digit represents exactly 3 bits.

Octal was historically used on systems with 12-, 24-, or 36-bit words where grouping by 3 was natural. Its main modern relevance is Unix file permissions.

A file's permission field is 9 bits: three groups of three bits, for owner, group, and others. Within each group, the bits mean read (4), write (2), and execute (1). Each three-bit group maps to exactly one octal digit.

chmod 755  →  111 101 101
              rwx r-x r-x
              7   5   5

chmod 644  →  110 100 100
              rw- r-- r--
              6   4   4

chmod 600  →  110 000 000
              rw- --- ---
              6   0   0

When you see 0755, the leading 0 is C's octal prefix (Python 3 requires the explicit 0o755 form). The mental arithmetic is fast once you know the bit patterns: 7 = full permissions, 5 = read+execute, 4 = read only, 6 = read+write.
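Decoding a mode is just masking and shifting three bits at a time. A Python sketch (the helper is hypothetical; the stdlib's stat.filemode does this for full mode values):

```python
def describe_mode(mode: int) -> str:
    """Render an octal chmod mode as rwx strings, one group per octal digit."""
    groups = []
    for shift in (6, 3, 0):  # owner, group, others
        bits = (mode >> shift) & 0b111
        groups.append(("r" if bits & 4 else "-")
                      + ("w" if bits & 2 else "-")
                      + ("x" if bits & 1 else "-"))
    return " ".join(groups)

print(describe_mode(0o755))  # 'rwx r-x r-x'
print(describe_mode(0o644))  # 'rw- r-- r--'
```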

Bit Manipulation Operators

These operators work directly on the binary representation of integers.

a = 0b10110100  # 180
b = 0b11110000  # 240

a & b   # AND:  10110000 = 176  (both bits must be 1)
a | b   # OR:   11110100 = 244  (either bit can be 1)
a ^ b   # XOR:  01000100 = 68   (bits differ → 1)
~a      # NOT:  result is -181 in Python (bitwise complement of 180 in two's complement)
a << 2  # LEFT SHIFT:  10110100 → 1011010000 = 720  (multiply by 4)
a >> 2  # RIGHT SHIFT: 10110100 → 00101101   = 45   (integer divide by 4)

Shifting left by n is equivalent to multiplying by 2ⁿ. Shifting right by n is integer division by 2ⁿ. These were once critical for performance; today compilers do this optimization automatically, but you'll encounter these patterns in legacy code, hardware registers, and low-level protocols.
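You can confirm the shift/arithmetic equivalence directly in Python:

```python
a = 0b10110100            # 180, the same value as above
print(a << 3 == a * 8)    # True: left shift by 3 multiplies by 2**3
print(a >> 3 == a // 8)   # True: right shift by 3 floor-divides by 2**3
```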

Bitmasks and Flags

Bitmasks let you pack multiple boolean flags into a single integer. Each bit represents one flag. Testing and setting flags is done with AND and OR.

const READ    = 0b001  // 1
const WRITE   = 0b010  // 2
const EXECUTE = 0b100  // 4

let perms = READ | WRITE  // 0b011 = 3, has both flags set

// Test if a flag is set
(perms & READ) !== 0      // → true
(perms & EXECUTE) !== 0   // → false

// Set a flag
perms |= EXECUTE           // 0b111 = 7

// Clear a flag
perms &= ~WRITE            // 0b101 = 5

// Toggle a flag
perms ^= READ              // 0b100 = 4

This pattern appears everywhere: Unix permissions, network protocol flags, DOM event modifier keys (event.ctrlKey, event.shiftKey), OpenGL state bitfields, and database permission systems.
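In Python, the same flag pattern is idiomatic with enum.IntFlag, which keeps the raw bit values but adds names (a sketch; the Perm class and its members are mine):

```python
from enum import IntFlag

class Perm(IntFlag):
    READ = 0b001
    WRITE = 0b010
    EXECUTE = 0b100

perms = Perm.READ | Perm.WRITE   # 0b011
print(Perm.READ in perms)        # True  — test a flag with membership
print(Perm.EXECUTE in perms)     # False
perms |= Perm.EXECUTE            # set    -> 0b111
perms &= ~Perm.WRITE             # clear  -> 0b101
perms ^= Perm.READ               # toggle -> 0b100
print(perms == Perm.EXECUTE)     # True
```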

Hex in Colors

CSS and design tools use hex notation for RGB colors. #RRGGBB — two hex digits (one byte) per color channel.

#FF0000  → R=255, G=0,   B=0    (red)
#00FF00  → R=0,   G=255, B=0    (green)
#1A2B3C  → R=26,  G=43,  B=60   (dark blue-gray)

The shorthand #RGB expands by doubling each digit: #F0A → #FF00AA. When you see a design with #E5E7EB, you can read it immediately: all three channels sit slightly below full intensity (0xFF = 255), which means a light gray.

Alpha-channel hex adds a fourth byte: #RRGGBBAA. So #00000080 is black at 50% opacity (0x80 = 128, which is 128/255 ≈ 50%).
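Parsing either form is a matter of slicing two hex digits per channel. A Python sketch (the function name is illustrative):

```python
def parse_hex_color(s: str) -> tuple:
    """Parse '#RRGGBB' or '#RRGGBBAA' into a tuple of integer channels."""
    s = s.lstrip("#")
    return tuple(int(s[i:i + 2], 16) for i in range(0, len(s), 2))

print(parse_hex_color("#1A2B3C"))    # (26, 43, 60)
print(parse_hex_color("#00000080"))  # (0, 0, 0, 128)
```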

Reading a Hex Dump

A hex dump shows the raw bytes of a file alongside their printable ASCII representation. This is how you look at binary data without a specialized parser.

Offset  Hex bytes                                ASCII
00000000  25 50 44 46 2d 31 2e 34  0a 25 e2 e3 cf d3 0a 36  |%PDF-1.4.%.....6|

Each pair of hex digits is one byte. The ASCII column shows printable characters and dots for non-printable bytes. The first four bytes 25 50 44 46 are %PDF — the PDF magic number.

Hex dumps let you verify file signatures (magic bytes at the start of a file), inspect binary protocols, and debug serialization issues. On the command line, xxd and hexdump -C produce this output.
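The format is simple enough to reproduce yourself. A rough Python sketch of hexdump -C-style output (not byte-for-byte identical to the real tool):

```python
def hex_dump(data: bytes, width: int = 16) -> str:
    """Render bytes as offset, hex pairs, and printable-ASCII columns."""
    lines = []
    for offset in range(0, len(data), width):
        chunk = data[offset:offset + width]
        hex_part = " ".join(f"{b:02x}" for b in chunk)
        # Printable ASCII is 0x20..0x7e; everything else becomes a dot.
        ascii_part = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        lines.append(f"{offset:08x}  {hex_part:<{width * 3}} |{ascii_part}|")
    return "\n".join(lines)

print(hex_dump(b"%PDF-1.4\n"))
```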

Practical Examples in Common Languages

# Python
bin(255)        # '0b11111111'
oct(255)        # '0o377'
hex(255)        # '0xff'
int('ff', 16)   # 255
int('11111111', 2)  # 255
f'{255:08b}'    # '11111111' (8-bit binary string)
f'{255:02x}'    # 'ff' (lowercase hex)

// JavaScript
(255).toString(2)   // '11111111'
(255).toString(8)   // '377'
(255).toString(16)  // 'ff'
parseInt('ff', 16)  // 255
parseInt('377', 8)  // 255

// C
printf("%d %o %x\n", 255, 255, 255);  // 255 377 ff
int x = 0xFF;     // hex literal
int y = 0377;     // octal literal (leading zero)
int z = 0b11111111;  // binary literal (GCC extension, C23 standard)

The Number Base Converter handles base conversion for any value instantly — useful when you need to cross-check a conversion or work with unusual bases beyond 2, 8, 10, and 16.

For understanding how these byte values connect to text encoding — why 0x41 is 'A' and why 0xE2 0x82 0xAC is '€' — see How UTF-8 and Unicode Work. For how binary data gets encoded for transmission as text (Base64), Encoding vs Encryption vs Hashing explains where that fits in the broader picture.

The Hash Generator and Base64 Encoder both output hex strings — once you can read hex fluently, their outputs become immediately interpretable rather than opaque character sequences.