The original file has 619502 bytes (605.0k) and is available from the project website.
There you can also find the official minified version, which brings the size down to 160737 bytes (157.0k, or 26%).

After GZIP compression these minified files vary in size:
Method / Source      Size                  Origin
Boot                 63796 bytes (62.3k)   CDN
cdnhttps             55417 bytes (54.1k)   CDN
cdnjs                55316 bytes (54.0k)   CDN
gzip -6 (default)    55066 bytes (53.8k)   local copy
gzip -9              55009 bytes (53.7k)   local copy
7zip -mx=9 -tgzip    52430 bytes (51.2k)   local copy
libdeflate -12       52419 bytes (51.2k)   local copy
pigz -11 -n          52380 bytes (51.2k)   local copy
kzip -s0 -rn -b1     52379 bytes (51.2k)   local copy
Zopfli               52218 bytes (51.0k)   local copy
Zopfli (defluff)     52215 bytes (51.0k)   local copy

Permalink to the smallest file on my server:
http://minime.stephan-brumme.com/files/dojo/dojo-1.9.3.min.js

You will automatically get the smallest Dojo 1.9.3 file. ETag caching is available, and
if your browser doesn't support GZIP decompression, the uncompressed version is sent instead.
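The fallback logic can be sketched in shell. This is illustrative only (the file names match those above, but it is not the actual server code): clients that advertise gzip support in their Accept-Encoding header get the pre-compressed Zopfli file, everyone else gets the plain minified file.

```shell
# Hypothetical sketch of the server-side decision, not the real server code.
pick_file() {
  case "$1" in                  # $1 = client's Accept-Encoding header
    *gzip*) echo "dojo-1.9.3.min.zopfli.js.gz" ;;
    *)      echo "dojo-1.9.3.min.js" ;;
  esac
}
pick_file "gzip, deflate"   # prints dojo-1.9.3.min.zopfli.js.gz
pick_file ""                # prints dojo-1.9.3.min.js
```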

Currently best Zopfli settings

Save 3098 bytes by using my Dojo 1.9.3 Zopfli version instead of the best available CDN (5.60% smaller than cdnjs, 52218 vs. 55316 bytes).
You can use my super-compressed files for whatever purpose you like as long as you respect the library's original license agreement.
There are no restrictions from my side - but please avoid hot-linking if you run a high-traffic website.

These command-line settings yielded the best compression ratio so far (Linux version of zopfli-krzymod):
zopfli --i100000 --mb8 --mls2 --bsr7 --lazy --ohh

(found December 3, 2015)
Description                       Value    Parameter
iterations                        100000   --i100000
maximum blocks                    8        --mb8
maximum length score              2        --mls2
block splitting recursion         7        --bsr7
lazy matching in LZ77             yes      --lazy
optimized Huffman headers         yes      --ohh
initial random W for iterations   1        --rw1
initial random Z for iterations   2        --rz2
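Stock Zopfli only exposes the iteration count (--i); the other switches above come from the krzymod fork. A minimal sketch of the basic trade-off, comparing gzip -9 against plain Zopfli on a throw-away file (zopfli is treated as optional here, since it may not be installed):

```shell
# Compare gzip -9 with stock Zopfli on a sample file; zopfli is optional.
seq 1 2000 > sample.txt
gzip -9 -c sample.txt > sample-gzip9.gz
echo "gzip -9: $(wc -c < sample-gzip9.gz) bytes"
if command -v zopfli >/dev/null 2>&1; then
  # more iterations -> smaller output, but much slower
  zopfli --i100 -c sample.txt > sample-zopfli.gz
  echo "zopfli:  $(wc -c < sample-zopfli.gz) bytes"
fi
```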

Even Smaller Files Thanks To Defluff

Zopfli's output can be further optimized by the defluff tool.
In this particular case, defluff saves 3 more bytes (52215 bytes).
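The pipeline itself is simple, assuming both tools are on the PATH and that defluff acts as a filter reading a DEFLATE stream on stdin and writing the optimized stream to stdout (a sketch, skipped gracefully when either tool is missing):

```shell
# Sketch: Zopfli first, then defluff; both tools are optional here.
printf 'var answer = 42;\n' > sample.js
if command -v zopfli >/dev/null 2>&1 && command -v defluff >/dev/null 2>&1; then
  zopfli -c sample.js > sample.js.gz       # strong DEFLATE compression
  defluff < sample.js.gz > sample.min.gz   # shave a few more bytes off the stream
  wc -c sample.js.gz sample.min.gz
fi
```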

Verify file integrity

After decompression, my uncompressed files are identical to the original ones:

MD5:
curl --silent --compressed http://download.dojotoolkit.org/release-1.9.3/dojo.js --location | md5sum
74969b3e4d512389462576e87cd89f5e  -
curl --silent --compressed http://minime.stephan-brumme.com/files/dojo/dojo-1.9.3.min.zopfli.js.gz | md5sum
74969b3e4d512389462576e87cd89f5e  -

SHA1:
curl --silent --compressed http://download.dojotoolkit.org/release-1.9.3/dojo.js --location | sha1sum
2462ca7c70647853b69cd5808a6a32e476ae9567  -
curl --silent --compressed http://minime.stephan-brumme.com/files/dojo/dojo-1.9.3.min.zopfli.js.gz | sha1sum
2462ca7c70647853b69cd5808a6a32e476ae9567  -
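The same check works locally with any gzip stream: decompressing must reproduce the original file, hash for hash. A quick self-contained demonstration:

```shell
# Round-trip demonstration: compressing and decompressing preserves the MD5 hash.
printf 'dojo.provide("demo");\n' > original.js
gzip -9 -c original.js > original.js.gz
h1=$(md5sum original.js | cut -d' ' -f1)
h2=$(gzip -dc original.js.gz | md5sum | cut -d' ' -f1)
[ "$h1" = "$h2" ] && echo "identical"
```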

These CDNs send you the original file:
CDN     Size (compressed)   MD5 (uncompressed)                 Timestamp
Boot    63796 bytes         74969b3e4d512389462576e87cd89f5e   March 19, 2015 @ 15:43
cdnjs   55316 bytes         74969b3e4d512389462576e87cd89f5e   June 6, 2014 @ 17:15

And some CDNs send you a different file:
CDN        Size (compressed)   MD5 (uncompressed)                 Timestamp
cdnhttps   55417 bytes         74bee2585a28c8397f79fc71093e8e58   December 24, 2015 @ 07:33

Diff against the original file:
< /*
< Copyright (c) 2004-2011, The Dojo Foundation All Rights R [...]
< Available via Academic Free License >= 2.1 OR the modifie [...]
< see: http://dojotoolkit.org/license for details
< */
<
< /*
< This is an optimized version of Dojo, built for deploymen [...]
< development. To get sources and documentation, please vis [...]
<
[...]

Note: only the MD5 hashes are shown to keep things simple.

Other Versions

Available Dojo versions at minime.stephan-brumme.com:

1.13.0,
1.12.3, 1.12.2, 1.12.1,
1.11.5, 1.11.4, 1.11.3, 1.11.2, 1.11.1, 1.11.0,
1.10.9, 1.10.8, 1.10.7, 1.10.6, 1.10.5, 1.10.4, 1.10.3, 1.10.2, 1.10.1, 1.10.0,
1.9.11, 1.9.10, 1.9.9, 1.9.8, 1.9.7, 1.9.6, 1.9.5, 1.9.4, 1.9.3, 1.9.2, 1.9.1, 1.9.0,
1.8.12, 1.8.11, 1.8.10, 1.8.9, 1.8.8, 1.8.7, 1.8.6, 1.8.5, 1.8.4, 1.8.3, 1.8.2, 1.8.1, 1.8.0,
1.7.10, 1.7.9, 1.7.8, 1.7.7, 1.7.6, 1.7.5, 1.7.4, 1.7.3, 1.7.2, 1.7.1, 1.7.0,
1.6.3, 1.6.2, 1.6.1, 1.6.0,
1.5.4, 1.5.3, 1.5.2, 1.5.1, 1.5.0,
1.4.6, 1.4.5, 1.4.4, 1.4.3, 1.4.2, 1.4.1, 1.4.0,
1.3.3, 1.3.2, 1.3.1, 1.3.0,
1.2.3

The project site contains an overview of how well these versions were compressed.
Other interesting projects are AngularJS, BackboneJS, Bootstrap, D3, Ember, jQuery, Knockout, lodash, React, Socket.IO, ThreeJS, UnderscoreJS and Vue.

Changelog

Best Zopfli parameters so far:
Size          Improvement   Parameters                                    Found
52218 bytes   -4 bytes      zopfli --i100000 --mls2 --bsr7 --lazy --ohh   December 3, 2015 @ 18:14
52222 bytes   -1 byte       zopfli --i10000 --mls2 --bsr7 --lazy --ohh    November 23, 2015 @ 00:11
52223 bytes   -6 bytes      zopfli --i10000 --mls2 --bsr11 --lazy --ohh   October 13, 2015 @ 20:22
52229 bytes   -1 byte       zopfli --i1000 --mls2 --bsr11 --lazy --ohh    September 22, 2015 @ 18:43
52230 bytes   -2 bytes      zopfli --i1000 --mls2 --bsr15 --lazy --ohh    September 22, 2015 @ 18:31
52232 bytes   -8 bytes      zopfli --i1000 --mls8 --bsr11 --lazy --ohh    September 19, 2015 @ 03:30
52240 bytes                 zopfli --i100 --mls8 --bsr11 --lazy --ohh     September 18, 2015 @ 15:29

If there are multiple parameter sets yielding the same compressed size, only the first one found is shown.

Most recent activity on June 16, 2016 @ 14:36.

Heatmaps

This Zopfli heatmap visualizes how compression changes when the --bsr and --mls parameters are modified.
Each cell shows the best file size achieved in bytes (hover over a cell to see the number of iterations).

Good parameters are green, bad are red. The best and worst are bold as well.
The brightness of the blue background color indicates how many iterations were processed:
10,000 or 100,000.
[Heatmap table: rows correspond to --bsr values, columns to --mls values from 2 to 32768; cell values range from 52218 bytes (best, in the --mls2 column) to 52308 bytes (worst).]

Due to the Monte Carlo design of my search algorithm, not all parameters have reached the same number of iterations yet:
Iterations   Min. Bytes    Reduction   Coverage
100          52237 bytes               100%
1,000        52229 bytes   -8 bytes    100%
10,000       52222 bytes   -7 bytes    100%
100,000      52218 bytes   -4 bytes    1.45%
1,000,000
10,000,000

KZIP has far fewer options available for tuning/optimization. I only played around with the number of blocks (parameter -n):
Blocks   Min. Bytes    Compared To Best Zopfli   Compared To Best KZIP
         52435 bytes   +217 bytes (+0.42%)       +56 bytes
         52379 bytes   +161 bytes (+0.31%)
         52395 bytes   +177 bytes (+0.34%)       +16 bytes
         52434 bytes   +216 bytes (+0.41%)       +55 bytes
         52468 bytes   +250 bytes (+0.48%)       +89 bytes
         52487 bytes   +269 bytes (+0.52%)       +108 bytes
         52498 bytes   +280 bytes (+0.54%)       +119 bytes
         52525 bytes   +307 bytes (+0.59%)       +146 bytes
         52533 bytes   +315 bytes (+0.60%)       +154 bytes

Non-DEFLATE Algorithms

Archivers based on completely different compression algorithms often produce superior results.
Unfortunately, browsers only support gzip compression at the moment.
Algorithm                   Program   Parameters                         Size          Compared To Best Zopfli
ZPAQ                        zpaq      zpaq -method 69                    38278 bytes   -13940 bytes (-26.70%)
RAR (proprietary)           RAR       rar a -m5 -md64m -mc63:128t -mt1   45412 bytes   -6806 bytes (-13.03%)
PPMd                        7zip      7za a -mx=9 -m0=ppmd               46746 bytes   -5472 bytes (-10.48%)
Brotli                      brotli    brotli -q 11                       47838 bytes   -4380 bytes (-8.39%)
LZMA2                       xz        xz -9                              49024 bytes   -3194 bytes (-6.12%)
Burrows-Wheeler transform   bzip2     bzip2 -9                           50341 bytes   -1877 bytes (-3.59%)
ZSTD                        zstd      zstd -19                           50476 bytes   -1742 bytes (-3.34%)
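A small loop reproduces this kind of comparison on any file, using whichever of these compressors happen to be installed (only gzip is assumed to exist; the others are skipped if missing):

```shell
# Compare installed general-purpose compressors on a sample file.
seq 1 5000 > sample.txt
for cmd in "gzip -9" "bzip2 -9" "xz -9" "zstd -19"; do
  tool=${cmd%% *}                        # first word = program name
  command -v "$tool" >/dev/null 2>&1 || continue
  size=$($cmd -c < sample.txt | wc -c)   # compress to stdout, count bytes
  printf '%-8s %8d bytes\n' "$tool" "$size"
done
```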

Detailed Analysis

I wrote a DEFLATE decoder in JavaScript. A button on the project page starts a client-side analysis of the smallest gzipped files (it may take a second).


Notes: pigz is a fast open source multi-threaded implementation of gzip written by one of the original authors of gzip.
However, when using compression level 11, pigz actually switches to the slower Zopfli algorithm and isn't multi-threaded anymore.
KrzyMOD's extensions to Zopfli offer the highest level of configurability and are therefore used for my brute-force search.
Ken Silverman wrote the closed-source KZIP compression program and Jonathon Fowler ported it to Linux.
Defluff was created by Joachim Henke; DeflOpt is a tool by Ben Jos Walbeehm.

website made by Stephan Brumme in 2015 and still improving in 2018.
all timestamps are displayed in central european time. see my changelog.
no flash, not even images or external css files - and everything squeezed into a single html file.
which was handsomely compressed before releasing it into the wild internet - obviously.

please visit my homepage and my blog, too.
email: minime (at) stephan-brumme.com