DevToys Web Pro Blog

Hex to ASCII Guide: Reading Hex Dumps, Byte Encoding, and Practical Uses

8 min read

Hex encoding shows up everywhere in software development: hexdump output from binary files, SHA-256 digests printed as 64-character strings, Wireshark packet payloads, CSS color values, and UTF-8 byte sequences in protocol headers. Once you understand the relationship between bytes, ASCII, and their hex representations, you can read all of these at a glance. Use the Hex / ASCII Converter to follow along with the examples in this article.

Why Hex? Binary Is Too Long, Decimal Doesn't Align

A byte is 8 bits. In binary, the value 65 (the letter "A") is 01000001 — eight characters to represent a single byte. In decimal it is just 65, which looks compact until you try to reason about byte boundaries: the string "AB" becomes 6566, which requires you to split it manually at every two or three digits.

Hexadecimal solves both problems. Base 16 means each hex digit represents exactly 4 bits, so two hex digits represent exactly one byte. "AB" becomes 41 42. Byte boundaries are always visible. A 4-byte integer takes exactly 8 hex characters. There is no ambiguity about where one byte ends and the next begins.
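The 4-bits-per-digit relationship is easy to verify in Python; a minimal sketch using only the built-in format():

```python
# Each hex digit covers exactly 4 bits, so one byte is always two hex digits.
value = 0x41  # the letter "A"
print(format(value, '08b'))  # binary: 01000001
print(format(value, '02X'))  # hex:    41

# Splitting the binary string into nibbles lines up with the hex digits:
bits = format(value, '08b')
high, low = bits[:4], bits[4:]
print(int(high, 2), int(low, 2))  # 4 and 1, the two hex digits of 0x41
```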

This is why every low-level tool — debuggers, network analyzers, crypto libraries, file inspection utilities — defaults to hexadecimal output.

The ASCII Table: What the Ranges Mean

ASCII assigns a number from 0 to 127 to each character. A few ranges matter most in practice:

Range (hex)   Range (decimal)   Name                   Examples
0x00–0x1F     0–31              Control characters     0x00 NUL, 0x09 TAB, 0x0A LF, 0x0D CR
0x20–0x7E     32–126            Printable characters   Space (0x20), A–Z (0x41–0x5A), a–z (0x61–0x7A), 0–9 (0x30–0x39)
0x7F          127               DEL                    Delete control character (not printable)
0x80–0xFF     128–255           Extended / multi-byte  UTF-8 continuation bytes, Latin-1 extensions

A few values worth memorizing: uppercase letters start at 0x41 and lowercase at 0x61. The difference between an uppercase letter and its lowercase counterpart is always 0x20, which is why toggling bit 5 of an ASCII byte flips its case. Digits 0–9 start at 0x30, so the digit n is 0x30 + n.
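The bit-5 trick and the digit offset can be checked directly in Python:

```python
# XOR with 0x20 flips bit 5, toggling the case of an ASCII letter.
print(chr(ord('A') ^ 0x20))  # 'a'
print(chr(ord('z') ^ 0x20))  # 'Z'

# Digits: the digit n has code 0x30 + n.
print(hex(ord('7')))  # 0x37
```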

Anything below 0x20 or equal to 0x7F is a control character that will not render visibly. Hexdump tools show a dot (.) in the ASCII gutter for these bytes.

Converting Bytes to Hex

The conversion is mechanical: take each byte value (0–255), write it as a two-digit hexadecimal number, padding with a leading zero when the value is below 16. The string "Hello" encodes as:

Character   Decimal   Hex
H           72        48
e           101       65
l           108       6C
l           108       6C
o           111       6F

Result: 48 65 6C 6C 6F (or 48656c6c6f without spaces).
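The table above can be reproduced with a short loop; a sketch using only builtins:

```python
# Convert each byte to a zero-padded two-digit hex value.
text = "Hello"
pairs = [(ch, ord(ch), format(ord(ch), '02X')) for ch in text]
for ch, dec, hx in pairs:
    print(ch, dec, hx)

print(' '.join(hx for _, _, hx in pairs))  # 48 65 6C 6C 6F
```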

Uppercase vs lowercase convention: Both 6C and 6c are valid. Uppercase is traditional in hardware documentation, protocol specs, and binary file formats. Lowercase is more common in cryptographic output (SHA-256 digests, TLS fingerprints) and programming language defaults. The Hex / ASCII Converter accepts both and lets you choose the output case.

Reading Hex Dumps

Two tools produce hex dumps that you will encounter constantly: hexdump -C on macOS and Linux, and xxd (included with Vim). Both use the same three-column layout:

$ echo -n "Hello, world!" | xxd
00000000: 4865 6c6c 6f2c 2077 6f72 6c64 21        Hello, world!

The three columns are:

  • Offset — byte position from the start of the stream, written in hex. 00000000 means byte 0. The next line would start at 00000010 (16 in hex = 16 bytes per line).
  • Hex values — the bytes themselves, two hex digits per byte. xxd groups them in pairs of bytes (four hex digits); hexdump -C prints each byte separately. Spacing and grouping are cosmetic and do not affect the data.
  • ASCII gutter — the same bytes decoded as ASCII text. Non-printable bytes (control characters, bytes above 0x7E) appear as a dot.
$ hexdump -C /bin/ls | head -3
00000000  cf fa ed fe 07 00 00 01  00 00 00 00 02 00 00 00  |................|
00000010  12 00 00 00 90 0b 00 00  85 00 20 00 00 00 00 00  |.......... .....|
00000020  19 00 00 00 48 00 00 00  5f 5f 50 41 47 45 5a 45  |....H...__PAGEZE|

In this binary file the first four bytes are CF FA ED FE — the Mach-O magic number for a 64-bit little-endian executable on macOS. The ASCII gutter shows dots because those byte values fall outside the printable range. Further into the file, at offset 0x28, the text __PAGEZE is visible — the start of the __PAGEZERO segment name, stored as a plain ASCII string inside the binary and cut off here by the 16-byte line width.

Practical Uses

Packet Captures (Wireshark)

Wireshark displays packet payloads in the same three-column hex dump format. When debugging an HTTP or WebSocket payload, you can see the exact bytes on the wire. If a text payload looks garbled in the ASCII column, the bytes above 0x7F indicate a non-ASCII encoding — likely UTF-8 with multi-byte sequences.

Binary File Inspection

File format magic numbers live at the start of files. Common ones:

File type            Magic bytes (hex)         ASCII
PNG image            89 50 4E 47 0D 0A 1A 0A   .PNG....
JPEG image           FF D8 FF                  non-printable
PDF document         25 50 44 46               %PDF
ZIP archive          50 4B 03 04               PK..
ELF binary (Linux)   7F 45 4C 46               .ELF
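Checking a header against these magic bytes is a one-liner per format. A minimal sketch — the sniff() helper and MAGICS table are illustrative names, with values taken from the table above:

```python
# Map magic-byte prefixes to file types (values from the table above).
MAGICS = {
    b'\x89PNG\r\n\x1a\n': 'PNG image',
    b'\xff\xd8\xff': 'JPEG image',
    b'%PDF': 'PDF document',
    b'PK\x03\x04': 'ZIP archive',
    b'\x7fELF': 'ELF binary',
}

def sniff(header: bytes) -> str:
    """Return the file type whose magic prefix matches, else 'unknown'."""
    for magic, name in MAGICS.items():
        if header.startswith(magic):
            return name
    return 'unknown'

print(sniff(b'%PDF-1.7 ...'))  # PDF document
print(sniff(b'\x7fELF\x02'))   # ELF binary
```

In practice you would read the first 8 or so bytes of the file with open(path, 'rb') and pass them to the helper.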

Cryptographic Output: SHA-256 Hex vs Base64

A SHA-256 digest is 32 bytes. Printed as hex, it is 64 lowercase characters. Printed as Base64, it is 44 characters. Both represent the same 32 bytes — the encoding choice depends on context. Command-line tools and most APIs return hex by default: openssl dgst -sha256 outputs hex. Certificate fingerprints in browsers use colon-separated hex pairs. Base64 is preferred when the digest must travel inside a JSON field or HTTP header where compactness matters. See the Base64 in Practice article for more on that tradeoff.
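The length arithmetic is easy to confirm with the standard-library hashlib and base64 modules:

```python
import base64
import hashlib

# A SHA-256 digest is always 32 raw bytes, whatever the input length.
digest = hashlib.sha256(b'Hello').digest()

hex_form = digest.hex()                          # 2 chars per byte
b64_form = base64.b64encode(digest).decode('ascii')  # 4 chars per 3 bytes, padded

print(len(digest), len(hex_form), len(b64_form))  # 32 64 44
```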

CSS Hex Colors

CSS color values like #1A2B3C use the same hex encoding. The three byte pairs are RGB: red = 0x1A (26), green = 0x2B (43), blue = 0x3C (60). Eight-character forms add an alpha channel: #1A2B3CFF is fully opaque. The encoding is identical to any other byte-to-hex conversion — CSS just gives it a specific semantic meaning.
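Parsing a CSS color is the same two-digits-per-byte conversion in reverse; a short sketch:

```python
# Parse a CSS hex color into its RGB components with int(..., 16).
color = '#1A2B3C'
r, g, b = (int(color[i:i + 2], 16) for i in (1, 3, 5))
print(r, g, b)  # 26 43 60

# The reverse is ordinary byte-to-hex formatting.
print('#' + ''.join(format(v, '02X') for v in (r, g, b)))  # #1A2B3C
```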

Code Examples

JavaScript / Node.js

// String to hex
Buffer.from('Hello').toString('hex');
// => '48656c6c6f'

// Hex to string
Buffer.from('48656c6c6f', 'hex').toString('utf8');
// => 'Hello'

// Uint8Array to hex (browser)
const bytes = new Uint8Array([0x48, 0x65, 0x6c, 0x6c, 0x6f]);
const hex = Array.from(bytes)
  .map(b => b.toString(16).padStart(2, '0'))
  .join('');
// => '48656c6c6f'

Python

# Bytes to hex
b'Hello'.hex()
# => '48656c6c6f'

# With separator
b'Hello'.hex(' ')
# => '48 65 6c 6c 6f'

# Hex to bytes
bytes.fromhex('48656c6c6f')
# => b'Hello'

# Hex to string
bytes.fromhex('48656c6c6f').decode('utf-8')
# => 'Hello'

xxd CLI — Hex Dump and Reverse

# Dump to hex
echo -n "Hello" | xxd -p
# => 48656c6c6f

# Reverse: hex back to binary
echo '48656c6c6f' | xxd -r -p
# => Hello

# Dump a file, 8 bytes per line
xxd -c 8 file.bin

Endianness Caveat

Hex encoding of a text string is straightforward: each character becomes its ASCII byte in order. Hex encoding of a multi-byte integer is not — byte order depends on endianness.

The 32-bit integer 305419896 (decimal) is 0x12345678 in hex. In memory on a little-endian system (x86, ARM in LE mode), the bytes are stored as 78 56 34 12 — least significant byte first. On a big-endian system (network byte order, most protocols), the same integer is stored as 12 34 56 78.
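Both layouts can be verified with Python's standard struct module:

```python
import struct

n = 0x12345678  # decimal 305419896

little = struct.pack('<I', n)  # little-endian: least significant byte first
big = struct.pack('>I', n)     # big-endian: network byte order

print(little.hex(' '))  # 78 56 34 12
print(big.hex(' '))     # 12 34 56 78
```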

A hexdump of a compiled binary will show integers in the endianness of the target architecture. A hexdump of the string "1234" will show 31 32 33 34 regardless of platform, because strings are not integers and have no endianness. See the Number Base Conversion article for how this extends to other bases.

Non-ASCII Bytes: UTF-8 and BOM Detection

UTF-8 encodes Unicode code points above U+007F as sequences of two to four bytes, all with the high bit set (values 0x80–0xFF). In a hexdump, you will see these as two or more consecutive bytes above 0x7F, which the ASCII gutter renders as dots.

The euro sign (€, U+20AC) encodes as three bytes: E2 82 AC. In a hexdump:

$ echo -n "€" | xxd
00000000: e282 ac                                  ...
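The same three bytes fall out of Python's UTF-8 encoder:

```python
# U+20AC (euro sign) encodes to three UTF-8 bytes.
print('€'.encode('utf-8').hex(' '))  # e2 82 ac

# Decoding the bytes recovers the character.
print(bytes.fromhex('e2 82 ac').decode('utf-8'))  # €
```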

A UTF-8 BOM (Byte Order Mark) is the three-byte sequence EF BB BF at the start of a file. It is not required in UTF-8 (unlike UTF-16, where it indicates byte order), but some Windows tools add it. If a text file behaves unexpectedly — extra whitespace, broken JSON parsing, mismatched string comparisons — check the first three bytes in a hexdump. EF BB BF at offset 0 confirms a BOM is present.

# Check for UTF-8 BOM
xxd file.txt | head -1
# If first bytes are: ef bb bf  ...  the file has a BOM

# Strip the BOM with tail
tail -c +4 file-with-bom.txt > file-no-bom.txt

Convert hex to ASCII and back, inspect individual bytes, and check UTF-8 sequences directly in your browser with the Hex / ASCII Converter — no installation required, nothing leaves your machine.