The original file has 621478 bytes (606.9k) and is available from the project website.
There you can also find the official minified version, which brings the size down to 161310 bytes (157.5k, i.e. 26% of the original).

After GZIP compression, the size of the minified file varies by compressor and source:

Compressor / CDN     Size                  Source
Boot                 63997 bytes (62.5k)   CDN
cdnhttps             55595 bytes (54.3k)   CDN
cdnjs                55480 bytes (54.2k)   CDN
gzip -6 (default)    55220 bytes (53.9k)   local copy
gzip -9              55166 bytes (53.9k)   local copy
libdeflate -12       52578 bytes (51.3k)   local copy
7zip -mx=9 -tgzip    52560 bytes (51.3k)   local copy
pigz -11 -n          52508 bytes (51.3k)   local copy
kzip -s0 -rn -b1     52502 bytes (51.3k)   local copy
Zopfli               52326 bytes (51.1k)   local copy
Zopfli (defluff)     52323 bytes (51.1k)   local copy

perma-link to the smallest file on my server:
http://minime.stephan-brumme.com/files/dojo/dojo-1.9.4.min.js

You will automatically get the smallest Dojo 1.9.4 file; ETag caching is supported, and
if your browser doesn't support GZIP decompression, the uncompressed version is sent instead.
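That fallback boils down to checking the client's Accept-Encoding request header. A minimal sketch of the idea in Python, with hypothetical file names; this is not the code actually running on my server:

```python
def pick_variant(accept_encoding: str) -> str:
    """Choose the file to serve based on the client's Accept-Encoding header."""
    encodings = [token.split(";")[0].strip().lower()
                 for token in accept_encoding.split(",")]
    if "gzip" in encodings:
        # pre-compressed Zopfli output, sent with Content-Encoding: gzip
        return "dojo-1.9.4.min.zopfli.js.gz"
    return "dojo-1.9.4.min.js"  # uncompressed fallback

print(pick_variant("gzip, deflate, br"))  # picks the pre-compressed variant
print(pick_variant("identity"))           # picks the plain file
```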

Currently Best Zopfli Settings

Save 3154 bytes by using my Dojo 1.9.4 Zopfli version instead of the best available CDN: it is 5.69% smaller than cdnjs (52326 vs. 55480 bytes).
You can use my super-compressed files for whatever purpose you like as long as you respect the library's original license agreement.
There are no restrictions from my side, but please avoid hot-linking if you run a high-traffic website.

These command-line settings yielded the best compression ratio so far (Linux version of zopfli-krzymod):
zopfli --i100000 --mb8 --mls2 --bsr13 --lazy --ohh

(found December 3, 2015)
Description                       Value    Parameter
iterations                        100000   --i100000
maximum blocks                    8        --mb8
maximum length score              2        --mls2
block splitting recursion         13       --bsr13
lazy matching in LZ77             yes      --lazy
optimized Huffman headers         yes      --ohh
initial random W for iterations   1        --rw1
initial random Z for iterations   2        --rz2

Even Smaller Files Thanks To Defluff

Zopfli's output can be further optimized by the defluff tool.
In this particular case, defluff saves 3 more bytes (52323 bytes).

Verify File Integrity

After decompression, my uncompressed files are identical to the original ones:

MD5:
curl --silent --compressed http://download.dojotoolkit.org/release-1.9.4/dojo.js --location | md5sum
cc2610b04e984868222722dacfafbd7b  -
curl --silent --compressed http://minime.stephan-brumme.com/files/dojo/dojo-1.9.4.min.zopfli.js.gz | md5sum
cc2610b04e984868222722dacfafbd7b  -

SHA1:
curl --silent --compressed http://download.dojotoolkit.org/release-1.9.4/dojo.js --location | sha1sum
9567a3856083ba0d323e52412165cb6fd6e9a18a  -
curl --silent --compressed http://minime.stephan-brumme.com/files/dojo/dojo-1.9.4.min.zopfli.js.gz | sha1sum
9567a3856083ba0d323e52412165cb6fd6e9a18a  -
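The same check can be scripted. A small sketch using Python's standard library, with synthetic data standing in for the real downloads:

```python
import gzip
import hashlib

def md5_of_decompressed(gz_bytes: bytes) -> str:
    """MD5 hex digest of the payload inside a gzip container."""
    return hashlib.md5(gzip.decompress(gz_bytes)).hexdigest()

# synthetic stand-ins for dojo.js and the .gz file on my server
original = b"function dojo(){ /* ... */ }"
compressed = gzip.compress(original)

assert md5_of_decompressed(compressed) == hashlib.md5(original).hexdigest()
print("hashes match:", md5_of_decompressed(compressed))
```

For the real files, replace the synthetic bytes with the downloaded content; the digests must come out identical, as shown above.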

These CDNs send you the original file:
CDN     Size (compressed)   MD5 (uncompressed)                 Timestamp
Boot    63997 bytes         cc2610b04e984868222722dacfafbd7b   March 19, 2015 @ 15:43
cdnjs   55480 bytes         cc2610b04e984868222722dacfafbd7b   February 8, 2015 @ 14:45

And some CDNs send you a different file:
CDN        Size (compressed)   MD5 (uncompressed)                 Timestamp
cdnhttps   55595 bytes         f04a073f11928b375a4e31ef50f9e7f2   December 24, 2015 @ 07:33

Diff against the original file:
< /*
< Copyright (c) 2004-2011, The Dojo Foundation All Rights R [...]
< Available via Academic Free License >= 2.1 OR the modifie [...]
< see: http://dojotoolkit.org/license for details
< */
<
< /*
< This is an optimized version of Dojo, built for deploymen [...]
< development. To get sources and documentation, please vis [...]
<
[...]

Note: only the MD5 hashes are shown to keep things simple.

Other Versions

Available Dojo versions at minime.stephan-brumme.com:

1.13.0,
1.12.3, 1.12.2, 1.12.1,
1.11.5, 1.11.4, 1.11.3, 1.11.2, 1.11.1, 1.11.0,
1.10.9, 1.10.8, 1.10.7, 1.10.6, 1.10.5, 1.10.4, 1.10.3, 1.10.2, 1.10.1, 1.10.0,
1.9.11, 1.9.10, 1.9.9, 1.9.8, 1.9.7, 1.9.6, 1.9.5, 1.9.4, 1.9.3, 1.9.2, 1.9.1, 1.9.0,
1.8.12, 1.8.11, 1.8.10, 1.8.9, 1.8.8, 1.8.7, 1.8.6, 1.8.5, 1.8.4, 1.8.3, 1.8.2, 1.8.1, 1.8.0,
1.7.10, 1.7.9, 1.7.8, 1.7.7, 1.7.6, 1.7.5, 1.7.4, 1.7.3, 1.7.2, 1.7.1, 1.7.0,
1.6.3, 1.6.2, 1.6.1, 1.6.0,
1.5.4, 1.5.3, 1.5.2, 1.5.1, 1.5.0,
1.4.6, 1.4.5, 1.4.4, 1.4.3, 1.4.2, 1.4.1, 1.4.0,
1.3.3, 1.3.2, 1.3.1, 1.3.0,
1.2.3

The project site contains an overview of how well these versions compress.
Other interesting projects are AngularJS, BackboneJS, Bootstrap, D3, Ember, jQuery, Knockout, lodash, React, Socket.IO, ThreeJS, UnderscoreJS and Vue.

Changelog

Best Zopfli parameters so far:
Size          Improvement   Parameters                                     Found
52326 bytes   -8 bytes      zopfli --i100000 --mls2 --bsr13 --lazy --ohh   December 3, 2015 @ 23:06
52334 bytes   -5 bytes      zopfli --i10000 --mls8 --bsr30 --lazy --ohh    September 24, 2015 @ 14:04
52339 bytes   -6 bytes      zopfli --i1000 --mls8 --bsr30 --lazy --ohh     September 24, 2015 @ 13:11
52345 bytes   -5 bytes      zopfli --i1000 --mls2 --bsr12 --lazy --ohh     September 22, 2015 @ 17:56
52350 bytes   -2 bytes      zopfli --i1000 --mls8 --bsr9 --lazy --ohh      September 19, 2015 @ 02:33
52352 bytes   -10 bytes     zopfli --i1000 --mls8 --bsr8 --lazy --ohh      September 19, 2015 @ 02:09
52362 bytes                 zopfli --i100 --mls8 --bsr9 --lazy --ohh       September 18, 2015 @ 15:56

If there are multiple parameter sets yielding the same compressed size, only the first one found is shown.

Most recent activity on June 16, 2016 @ 14:40.

Heatmaps

This Zopfli heatmap visualizes how compression changes when modifying the --bsr and --mls parameters.
Each cell shows the best file size achieved, in bytes (in the interactive version, hovering over a cell shows the number of iterations).

Good parameters are green, bad ones red; the best and worst values are bold as well.
The brightness of the blue background color indicates how many iterations were processed:
10,000 or 100,000.
bsr \ mls   2   4   8   16   32   64   128   256   512   1024   2048   4096   8192   16384   32768
52408 52406 52404 52406 52407 52408 52406 52406 52405 52424 52402 52427 52423 52426 52407
52370 52357 52347 52399 52417 52409 52419 52412 52408 52406 52411 52424 52423 52411 52406
52359 52354 52358 52402 52400 52408 52402 52404 52406 52404 52416 52425 52418 52420 52413
52348 52348 52338 52399 52400 52405 52406 52419 52408 52406 52416 52423 52417 52414 52410
52345 52346 52346 52401 52401 52401 52405 52399 52404 52418 52414 52424 52414 52406 52412
52348 52348 52341 52399 52401 52402 52405 52401 52409 52415 52406 52423 52421 52415 52410
52367 52366 52359 52399 52403 52401 52401 52404 52407 52407 52414 52416 52423 52426 52411
52340 52345 52355 52401 52402 52401 52403 52405 52408 52404 52417 52424 52424 52424 52411
52326 52349 52357 52399 52401 52401 52401 52405 52407 52404 52416 52409 52423 52401 52418
52326 52341 52347 52399 52401 52401 52401 52399 52405 52403 52402 52423 52423 52412 52409
52346 52345 52422 52400 52403 52401 52400 52401 52407 52401 52416 52413 52424 52413 52410
52347 52341 52358 52409 52401 52401 52401 52412 52405 52400 52401 52423 52417 52405 52409
52354 52350 52349 52399 52401 52402 52406 52407 52406 52406 52406 52423 52420 52409 52409
52349 52349 52423 52403 52402 52403 52403 52401 52405 52415 52414 52422 52424 52403 52411
52329 52351 52350 52400 52402 52401 52401 52403 52403 52400 52407 52423 52422 52400 52410
52400 52400 52400 52400 52402 52401 52409 52399 52407 52400 52417 52423 52421 52405 52410
52346 52351 52357 52400 52403 52402 52401 52404 52408 52400 52416 52422 52423 52424 52410
52348 52351 52356 52400 52399 52401 52401 52401 52403 52415 52407 52423 52423 52405 52407
52355 52345 52357 52400 52402 52401 52406 52404 52406 52415 52401 52416 52425 52415 52414
52345 52341 52354 52400 52406 52402 52405 52401 52405 52416 52417 52422 52422 52415 52410
52352 52357 52350 52399 52402 52400 52405 52405 52407 52406 52412 52423 52422 52404 52410
52347 52350 52328 52399 52403 52401 52406 52403 52407 52417 52415 52422 52423 52405 52410
52348 52349 52353 52399 52402 52401 52400 52403 52408 52417 52415 52423 52422 52404 52412

Due to the Monte Carlo design of my search algorithm, not all parameters have reached the same number of iterations yet:
Iterations   Min. Bytes    Reduction   Coverage
100          52349 bytes               100%
1,000        52339 bytes   -10 bytes   100%
10,000       52334 bytes   -5 bytes    100%
100,000      52326 bytes   -8 bytes    2.61%
1,000,000    (not yet reached)
10,000,000   (not yet reached)
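The search code itself isn't published; the idea is plain random sampling over the parameter grid, keeping the best result found so far. A toy sketch, in which the fake_zopfli cost function merely stands in for an actual zopfli run:

```python
import random

def random_search(evaluate, bsr_values, mls_values, trials=200, seed=42):
    """Monte Carlo search: sample random (bsr, mls) pairs, keep the smallest size."""
    rng = random.Random(seed)
    best_size, best_params = None, None
    for _ in range(trials):
        params = (rng.choice(bsr_values), rng.choice(mls_values))
        size = evaluate(params)
        if best_size is None or size < best_size:
            best_size, best_params = size, params
    return best_size, best_params

def fake_zopfli(params):
    # toy cost surface with its optimum at bsr=13, mls=2 (mimicking the real data)
    bsr, mls = params
    return 52326 + abs(bsr - 13) + abs(mls - 2)

print(random_search(fake_zopfli, list(range(2, 25)), [2, 4, 8, 16, 32, 64]))
```

Because sampling is random, cells of the grid accumulate iterations at different rates, which is exactly why the coverage percentages above differ.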

KZIP has far fewer options available for tuning/optimization. I only played around with the number of blocks (parameter -n):

Blocks   Min. Bytes    Compared To Best Zopfli   Compared To Best KZIP
         52568 bytes   +242 bytes (+0.46%)       +66 bytes
         52502 bytes   +176 bytes (+0.34%)
         52519 bytes   +193 bytes (+0.37%)       +17 bytes
         52555 bytes   +229 bytes (+0.44%)       +53 bytes
         52595 bytes   +269 bytes (+0.51%)       +93 bytes
         52618 bytes   +292 bytes (+0.56%)       +116 bytes
         52628 bytes   +302 bytes (+0.58%)       +126 bytes
         52651 bytes   +325 bytes (+0.62%)       +149 bytes
         52676 bytes   +350 bytes (+0.67%)       +174 bytes

Non-DEFLATE Algorithms

Archivers based on completely different compression algorithms often produce superior results.
Unfortunately, browsers only support gzip compression at the moment.
Algorithm                               Program   Parameters                         Size          Compared To Best Zopfli
ZPAQ (Wikipedia)                        zpaq      zpaq -method 69                    38448 bytes   -13878 bytes (-26.52%)
RAR (proprietary)                       RAR       rar a -m5 -md64m -mc63:128t -mt1   45532 bytes   -6794 bytes (-12.98%)
PPMd (Wikipedia)                        7zip      7za a -mx=9 -m0=ppmd               46853 bytes   -5473 bytes (-10.46%)
Brotli (Wikipedia)                      brotli    brotli -q 11                       47973 bytes   -4353 bytes (-8.32%)
LZMA2 (Wikipedia)                       xz        xz -9                              49116 bytes   -3210 bytes (-6.13%)
Burrows-Wheeler transform (Wikipedia)   bzip2     bzip2 -9                           50540 bytes   -1786 bytes (-3.41%)
ZSTD (Wikipedia)                        zstd      zstd -19                           50595 bytes   -1731 bytes (-3.31%)
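The gap between DEFLATE and stronger algorithms can be reproduced with Python's standard library (zlib for DEFLATE, plus bz2 and lzma). A sketch on sample data; the exact sizes depend entirely on the input:

```python
import bz2
import lzma
import zlib

# repetitive JavaScript-like sample data; real results depend on the input
data = b"function dojo(config, callback) { return callback(config); }\n" * 500

sizes = {
    "DEFLATE (zlib, level 9)": len(zlib.compress(data, 9)),
    "bzip2 (level 9)": len(bz2.compress(data, 9)),
    "LZMA (preset 9)": len(lzma.compress(data, preset=9)),
}
for name, size in sizes.items():
    print(f"{name}: {size} bytes")

# whatever the sizes, decompression must restore the input exactly
assert zlib.decompress(zlib.compress(data, 9)) == data
assert bz2.decompress(bz2.compress(data, 9)) == data
assert lzma.decompress(lzma.compress(data, preset=9)) == data
```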

Detailed Analysis

I wrote a DEFLATE decoder in JavaScript. Click the button below to start a client-side analysis of the smallest gzipped files (this may take a second):
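Any such analysis starts with the fixed 10-byte gzip header (RFC 1952) that precedes the raw DEFLATE stream. A minimal header parser in Python, not the actual JavaScript decoder:

```python
import gzip
import struct

def parse_gzip_header(blob: bytes) -> dict:
    """Decode the fixed 10-byte gzip header described in RFC 1952."""
    magic, method, flags, mtime, xfl, os_id = struct.unpack("<HBBIBB", blob[:10])
    assert magic == 0x8B1F, "missing gzip magic bytes 1f 8b"
    assert method == 8, "compression method must be 8 (DEFLATE)"
    return {"flags": flags, "mtime": mtime, "extra_flags": xfl, "os": os_id}

blob = gzip.compress(b"hello deflate")
print(parse_gzip_header(blob))
```

Depending on the flags byte, optional fields (extra data, original file name, comment, header CRC) follow before the compressed data itself.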


Notes: pigz is a fast, open-source, multi-threaded implementation of gzip written by one of the original authors of gzip.
However, at compression level 11, pigz actually switches to the slower Zopfli algorithm and is no longer multi-threaded.
KrzyMOD's extensions to Zopfli offer the highest level of configuration and are therefore used for my brute-force search.
Ken Silverman wrote the closed-source KZIP compression program, and Jonathon Fowler ported it to Linux.
Defluff was created by Joachim Henke; DeflOpt is a tool by Ben Jos Walbeehm.

website made by Stephan Brumme in 2015 and still improving in 2018.
all timestamps are displayed in central european time. see my changelog.
no flash, not even images or external css files - and everything squeezed into a single html file.
which was handsomely compressed before releasing it into the wild internet - obviously.

please visit my homepage and my blog, too.
email: minime (at) stephan-brumme.com