Convert Z to XZ


Z vs XZ Format Comparison

Aspect | Z (Source Format) | XZ (Target Format)
Format Overview
Z
Unix Compress

Unix compress is the original Unix compression utility from 1984, developed by Spencer Thomas, Jim McKie, Steve Davies, Ken Turkowski, James A. Woods, and Joe Orost. It uses the LZW algorithm with an adaptive dictionary growing from 9 to 16 bits. Once ubiquitous on Unix systems, compress fell out of use after the Unisys LZW patent controversy and is now encountered only in legacy archives from the 1980s and early 1990s.

Legacy Lossless
XZ
XZ Utils (LZMA2)

XZ Utils is a modern compression suite using the LZMA2 algorithm, developed by Lasse Collin since 2009 and built on Igor Pavlov's LZMA SDK. LZMA2 delivers the highest compression ratios among widely available Unix tools, often producing files 30-50% smaller than gzip. XZ has become the standard for Linux kernel source distribution, package tooling (dpkg, rpm), and any scenario where maximum compression justifies additional CPU time.

Modern Lossless
Technical Specifications
Algorithm: LZW (Lempel-Ziv-Welch)
Dictionary Size: 9 to 16 bits (adaptive)
Checksum: None
Multi-file: No — single file only
Extensions: .Z
Algorithm: LZMA2 (LZ77 + range coder)
Dictionary Size: 4 KB to 1.5 GB (configurable)
Checksum: None, CRC-32, CRC-64, or SHA-256 (selectable)
Multi-file: No — single file (use with tar for multiple)
Extensions: .xz, .lzma
Archive Features
  • Directory Support: No — single file only
  • Metadata: None — no filename or timestamp stored
  • Streaming: Yes — pipe-compatible
  • Recovery: No error recovery
  • Integrity: No checksums
  • Threading: Single-threaded only
  • Directory Support: No — single file (pair with tar)
  • Block Structure: Independent blocks for parallel decode
  • Streaming: Yes — full pipe support
  • Filters: BCJ filters for executable code optimization
  • Integrity: CRC-32, CRC-64, or SHA-256 (selectable)
  • Threading: Multi-threaded (xz -T, XZ Utils 5.2+; also pixz/pxz)
Command Line Usage

The compress command from classic Unix:

# Compress a file
compress document.txt
# Result: document.txt.Z

# Decompress
uncompress document.txt.Z

# View contents without decompressing
zcat document.txt.Z

XZ Utils on modern Linux systems:

# Compress with default settings
xz document.txt
# Result: document.txt.xz

# Maximum compression
xz -9e document.txt

# Decompress
unxz document.txt.xz
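
Where both tools are available, the two steps can be combined into a single pipeline. A minimal sketch, assuming GNU gzip (which can decompress legacy .Z data in place of uncompress) and xz are installed; the filename is a placeholder:

```shell
# Convert a legacy .Z file straight to .xz without writing the
# intermediate uncompressed file to disk.
# gzip -dc reads the LZW stream; xz -9e recompresses with LZMA2.
z_to_xz() {
  gzip -dc "$1" | xz -9e > "${1%.Z}.xz"
}
# Usage: z_to_xz document.txt.Z   produces document.txt.xz
```
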
Advantages
  • Was standard on every 1980s Unix system
  • Very fast decompression speed
  • Simple implementation
  • Decompressible by modern gzip (gzip -d reads .Z files)
  • Minimal CPU and memory requirements
  • Stable and well-documented binary format
  • Highest compression ratio among standard Unix tools
  • 30-50% smaller than gzip, 40-60% smaller than LZW
  • BCJ filters optimize compression of executables
  • SHA-256 integrity checking option
  • Standard for Linux kernel and distribution packages
  • Fast decompression despite high compression ratio
Disadvantages
  • Worst compression ratio among common formats
  • LZW patent caused industry-wide abandonment
  • No data integrity verification
  • Single file only — no directories
  • Not available on modern systems by default
  • Slowest compression among common Unix tools
  • High memory usage — up to 1.5 GB for maximum settings
  • Single file only — requires tar for archives
  • No native random access without indexed .xz
  • No encryption support
Common Uses
  • Legacy Unix system backups
  • Historical FTP and Usenet archives
  • Old compressed documentation
  • Archived software from 1980s-1990s
  • Some embedded systems
  • Linux kernel source distribution (tar.xz)
  • Debian/Ubuntu package compression (.deb)
  • Fedora/RHEL RPM packages
  • Large dataset archival for cold storage
  • Software release distribution
Best For
  • Accessing legacy .Z files from old systems
  • Processing historical compressed data
  • Systems limited to LZW compression
  • Historical Unix data research
  • Maximum compression for storage-constrained archives
  • Linux distribution package compression
  • Source code distribution where size matters
  • Cold storage archival of large datasets
Version History
Introduced: 1984 (Spencer Thomas et al.)
Algorithm: LZW (Terry Welch, 1984)
Status: Legacy — superseded by gzip in 1992
Patent: Unisys LZW patent expired June 2003
Introduced: 2009 (Lasse Collin, based on LZMA SDK)
Current Version: XZ Utils 5.6.x (2024)
Status: Active development, widely adopted
Evolution: compress → gzip → bzip2 → LZMA (2001) → XZ (2009)
Software Support
Windows: 7-Zip, WinRAR (extraction only)
macOS: gzip -d (backward compat)
Linux: gzip -d, ncompress package
Mobile: ZArchiver (Android)
Programming: Python subprocess, Perl Compress::LZW
Windows: 7-Zip, WinRAR, PeaZip
macOS: xz command (Homebrew), Keka
Linux: Built-in xz/unxz, file-roller, Ark
Mobile: ZArchiver (Android)
Programming: Python lzma, Java XZ for Java, liblzma

Why Convert Z to XZ?

Converting Z to XZ delivers one of the largest compression upgrades available: a jump from the weakest widely used compression algorithm (LZW, 1984) to the strongest common one (LZMA2, 2009). The LZMA2 algorithm used by XZ typically produces files 40-60% smaller than LZW-compressed equivalents, making this conversion the best choice when storage space or bandwidth is the primary concern. For a legacy archive that was 100 MB as .Z, the XZ version might be just 40-50 MB.

XZ has become the compression standard for the Linux ecosystem. The Linux kernel source is distributed as .tar.xz, Debian and Ubuntu use XZ for package compression, and Fedora/RHEL RPM packages have adopted XZ as their default. By converting legacy .Z files to .xz, you align with the modern Linux infrastructure, ensuring that standard tools like tar, dpkg, and rpm can process your archives without any additional configuration.

Despite its superior compression ratios, XZ decompresses quickly: much faster than it compresses, though still somewhat slower than gzip. The LZMA2 algorithm is asymmetric by design; compression is CPU-intensive, but decompression is efficient. This means that while the initial Z-to-XZ conversion takes more time, subsequent access to the compressed data is fast. This asymmetry makes XZ ideal for archives that are compressed once and decompressed many times.

XZ also offers advanced features unavailable in any predecessor format. BCJ (Branch/Call/Jump) filters can preprocess executable code before compression, significantly improving ratios on binaries and libraries. CRC-64 and optional SHA-256 integrity checks provide strong data verification. The block-based structure also permits parallel decompression (built into xz since version 5.4), ensuring the format remains relevant as hardware evolves.

Key Benefits of Converting Z to XZ:

  • Maximum Compression: 40-60% smaller files compared to LZW
  • Linux Standard: Default compression for kernel, dpkg, and RPM
  • Fast Decompression: Efficient decode despite high compression ratio
  • BCJ Filters: Specialized optimization for executable code
  • Strong Integrity: CRC-64 and optional SHA-256 verification
  • Active Development: Continuously improved by XZ Utils team
  • Future-Proof: Block structure enables parallel processing
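
The integrity check named above is selected at compression time with xz's --check option. A quick sketch, using a throwaway sample file:

```shell
# Compress with a SHA-256 integrity check instead of the default CRC-64.
printf 'archival payload' > sample.txt
xz --check=sha256 -f sample.txt   # produces sample.txt.xz
xz -l sample.txt.xz               # the Check column reports SHA-256
xz -dc sample.txt.xz              # decompression verifies the check
```
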

Practical Examples

Example 1: Archiving Legacy Unix Distribution Media

Scenario: A computing museum is preserving original software distributions from 1980s Unix systems, converting .tar.Z archives to the most space-efficient format for digital preservation.

Source: bsd_4.3_dist.tar.Z (120 MB, original distribution)
Conversion: Z → XZ
Result: bsd_4.3_dist.tar.xz (48 MB)

Benefits:
✓ 60% reduction — XZ dramatically outperforms LZW
✓ SHA-256 integrity check for archival verification
✓ Standard .tar.xz format for long-term preservation
✓ Fast decompression when researchers need access
✓ BCJ filter improves compression of included binaries

Example 2: Reducing Cold Storage Costs

Scenario: An enterprise is moving 15 years of legacy .Z compressed backups to cloud cold storage (AWS Glacier) and wants to minimize storage costs.

Source: 2,500 .Z files totaling 500 GB
Conversion: Z → XZ (batch, -6 preset)
Result: 2,500 .xz files totaling 210 GB

Cost savings:
✓ 58% storage reduction = 290 GB saved
✓ At $0.004/GB/month (Glacier): about $1.16/month saved
✓ Roughly $14/year in ongoing storage cost reduction
✓ CRC-64 checksums verify each file before upload
✓ One-time conversion, permanent savings
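
A batch run like this one could be scripted roughly as follows. This is a sketch, assuming gzip can read the legacy .Z streams; filenames are placeholders:

```shell
# Recompress every .Z file in the current directory with the -6 preset,
# keeping each original until its replacement verifies.
for f in *.Z; do
  [ -e "$f" ] || continue                 # no .Z files present
  gzip -dc "$f" | xz -6 > "${f%.Z}.xz" || continue
  xz -t "${f%.Z}.xz" && rm -- "$f"        # verify checksum, then drop original
done
```
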

Example 3: Converting Kernel Patch Archives

Scenario: A kernel developer has collected historical Linux kernel patches from the 1990s stored as .Z files and wants to align them with the modern .tar.xz distribution format.

Source: linux-1.0.tar.Z (2.8 MB, original 1994 release)
Conversion: Z → XZ
Result: linux-1.0.tar.xz (1.1 MB)

Benefits:
✓ 61% smaller — LZMA2 excels on C source code
✓ Consistent .tar.xz format matching kernel.org
✓ Extractable with standard tar xJf command
✓ CRC-64 integrity verification for historical data
✓ Matches modern kernel distribution conventions
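
Repackaging into the kernel.org-style layout uses tar's -J (xz) option. A sketch with placeholder names:

```shell
# Create a .tar.xz from a directory and list it back, the same layout
# used for kernel source releases.
mkdir -p demo-src
printf 'int main(void){return 0;}\n' > demo-src/main.c
tar -cJf demo-src.tar.xz demo-src   # -J selects xz compression
tar -tJf demo-src.tar.xz            # list contents to verify
```
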

Frequently Asked Questions (FAQ)

Q: How much compression improvement can I expect from Z to XZ?

A: XZ typically produces files 40-60% smaller than the original .Z files. Text data (source code, logs, documentation) sees the largest gains, often 55-65% smaller. Binary data typically improves by 35-45%. The improvement comes from LZMA2's much larger dictionary sizes and more sophisticated matching algorithms compared to LZW.

Q: Will XZ conversion take a long time?

A: XZ compression is significantly slower than compress or gzip — it trades CPU time for compression ratio. However, this is a one-time cost during conversion. Decompression is fast, though somewhat slower than gzip. For large files, you can use lower presets (xz -1 through xz -3) for faster compression with still-excellent ratios, or xz -9e for maximum compression when time is not a constraint.
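
The preset tradeoff is easy to measure directly. A sketch comparing a fast preset against maximum compression on a throwaway input:

```shell
# Compress the same input at two presets and compare sizes.
head -c 1000000 /dev/zero > sample.bin        # highly compressible input
xz -1  -k -f sample.bin && mv sample.bin.xz fast.xz
xz -9e -k -f sample.bin && mv sample.bin.xz max.xz
ls -l fast.xz max.xz                          # compare the two sizes
```
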

Q: How much memory does XZ compression require?

A: Memory usage depends on the compression preset. The default preset (-6) uses about 94 MB for compression and 9 MB for decompression. Maximum compression (-9e) can use up to 674 MB for compression but still only 65 MB for decompression. Lower presets use less memory: -1 uses just 9 MB. The original compress utility used minimal memory by comparison.
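
If memory is tight, xz can cap its own usage and scale the settings down automatically. A sketch using the --memlimit-compress option:

```shell
# Request -9 but cap compressor memory; xz adjusts the dictionary size
# downward to fit the limit (--no-warn keeps the exit status clean).
printf 'mem test' > sample.txt
xz -q --no-warn --memlimit-compress=100MiB -9 -f sample.txt
xz -dc sample.txt.xz
```
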

Q: Is XZ widely supported or is it too new?

A: XZ is fully mature and widely supported since 2009. It is installed by default on every major Linux distribution, is the standard format for Linux kernel distribution, and is used by dpkg (Debian/Ubuntu) and RPM (Fedora/RHEL) for package compression. On Windows, 7-Zip fully supports XZ. On macOS, XZ is available through Homebrew. Python includes the lzma module in its standard library.

Q: Is there data loss when converting Z to XZ?

A: No. Both formats use lossless compression. The conversion decompresses the LZW data and recompresses it with LZMA2. The file contents are bit-for-bit identical after extraction. XZ adds CRC-64 integrity verification that the .Z format lacks, providing additional data safety assurance.
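
Losslessness can be verified end to end with a byte-for-byte comparison. A sketch with a throwaway file:

```shell
# Round-trip a file through xz and confirm the bytes are identical.
printf 'bit-for-bit identical' > original.txt
xz -c original.txt > copy.xz       # compress a copy
xz -dc copy.xz > restored.txt      # decompress it
cmp original.txt restored.txt      # exits 0 only if byte-identical
```
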

Q: What are BCJ filters and should I use them?

A: BCJ (Branch/Call/Jump) filters are preprocessors that normalize relative addresses in executable code (x86, ARM, PowerPC, etc.) before compression. This improves compression of binaries, shared libraries, and object files by 5-15%. If your .Z file contains compiled programs or libraries from old Unix systems, BCJ filters can significantly improve the XZ compression ratio.
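
A filter chain with BCJ is selected explicitly on the command line; a custom chain replaces the preset flags, so LZMA2 must be named too. A sketch using /bin/ls as a stand-in executable:

```shell
# Apply the x86 BCJ filter ahead of LZMA2 when compressing machine code.
# /bin/ls is just a convenient binary; on non-x86 executables the filter
# is harmless but gives no gain.
xz --x86 --lzma2=preset=9 -c /bin/ls > ls.xz
xz -dc ls.xz | cmp - /bin/ls       # filtered compression is still lossless
```
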

Q: Should I choose XZ over GZ or BZ2 for this conversion?

A: Choose XZ when file size is the primary concern — archival storage, bandwidth-limited distribution, or cold storage. Choose GZ for maximum compatibility and fastest decompression. Choose BZ2 for good compression with block-level error recovery. XZ provides the best compression ratio but requires more CPU time and memory during compression.

Q: Can I use multi-threaded XZ compression?

A: Yes. XZ Utils 5.2+ supports multi-threaded compression with the -T flag (e.g., "xz -T4" for 4 threads). Third-party tools like pixz and pxz also provide parallel compression. Multi-threading dramatically speeds up compression of large files while maintaining the same output format. Decompression of multi-threaded XZ files works with any standard xz tool.
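
A sketch of threaded compression with the built-in -T option, using a throwaway input:

```shell
# Compress with 4 worker threads; -T0 would use one thread per core.
head -c 5000000 /dev/zero > big.bin    # placeholder large input
xz -T4 -6 -f big.bin                   # multi-threaded, standard .xz output
xz -t big.bin.xz                       # verifies with any standard xz tool
```
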