How about webpack now?

A while ago I wrote about why shadow-cljs doesn’t use webpack. All of that still applies: shadow-cljs still works completely without webpack (or any other JS tools) and will continue to do so. The recent release of :target :bundle support in ClojureScript itself, however, has people asking how it affects shadow-cljs.

TL;DR: It doesn’t.

What is :target :bundle?

You should read the official guide for the complete story, but in short: :target :bundle produces one output .js file in a format that the JS tool of your choice can process further, providing the JS dependencies instead of the CLJS compiler. Most commonly that tool will be webpack, but CLJS has no stake in which JS tool you actually use. The only requirement is that the output bundle file is processed by something before it is loaded. As a convenience CLJS provides a :bundle-cmd setting that lets you configure a command to run, but you can also just run it yourself.
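As a rough sketch (the namespace, paths and webpack flags here are assumptions, not taken from the official guide), a :target :bundle build config might look like this:

```clojure
;; build.edn sketch for plain ClojureScript
;; my.app.core, the paths and the webpack flags are placeholders
{:main          my.app.core
 :target        :bundle
 :output-to     "out/index.js"
 :output-dir    "out"
 ;; run automatically after compilation;
 ;; :none applies to dev builds, :default to optimized builds
 :bundle-cmd    {:none    ["npx" "webpack" "./out/index.js" "-o" "out/webpack"
                           "--mode" "development"]
                 :default ["npx" "webpack" "./out/index.js" "-o" "out/webpack"]}}
```

The :none / :default keys correspond to the :optimizations level of the build; consult the official bundling guide for the exact webpack invocation matching your webpack version.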

As I described in my previous post that is basically the first approach I tried in shadow-cljs but then abandoned for the reasons I mentioned. It is great to see a better default story in CLJS for this but it doesn’t solve the problems I wanted to solve.

What if I want to use <my favorite JS tool>?

There might be reasons why you’d rather stick with a JS tool to provide the JS dependencies you need. The default is to let shadow-cljs do everything so you don’t have to worry about it but if you really want to you have a couple options. For example webpack provides many features that shadow-cljs does not cover.

Option #1: :target :npm-module

:npm-module was the first option I added way back when. Its intent is to compile CLJS to an output format that can directly be consumed by any other JS tool (eg. node, webpack, etc). It allows easily integrating CLJS code into an existing JS codebase. I’m terrible at naming things and I should have called it :target :commonjs since that would be more accurate.

Basically each CLJS namespace will be exposed as a separate .js file that can be directly required from JS (eg. in node without any additional tool at all). As far as the JS tool is concerned the code will look like any other npm library.
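For illustration (the build id, output path and entry namespace are assumptions), an :npm-module build in shadow-cljs might be configured like this:

```clojure
;; shadow-cljs.edn sketch; :lib, "out" and my.lib are placeholders
{:builds
 {:lib
  {:target :npm-module
   :output-dir "out"
   :entries [my.lib]}}}

;; afterwards, plain node (or webpack) can consume the output, eg.
;; const lib = require("./out/my.lib.js");
```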

:npm-module however is rather hacky, so it is really only useful in situations where you want to integrate CLJS into an existing JS project. It works well enough, but it should not be used if you just have a couple of npm dependencies to fill.

Option #2: :js-provider :external

shadow-cljs has the concept of a :js-provider built-in. This controls who is actually in charge of providing JS dependencies. For node builds this is just :require, which maps all JS requires in your ns forms to regular node require calls. For :browser builds it defaults to :shadow, which means shadow-cljs will provide all JS dependencies and bundle them for you. An additional :js-provider I added not too long ago is :external. It is similar to what :bundle from CLJS provides, but with a few different design choices.

  {:target :browser
   :output-dir "public/js"
   :modules {:main {:init-fn}} ;; placeholder init fn
   :js-options {:js-provider :external
                :external-index "target/index.js"}}

In this example shadow-cljs will generate all the regular CLJS output and write it into the public/js directory. It will however not process any JS dependencies itself and instead just generates the additional :external-index file at the specified location. That file is just a regular JS file and will contain all the JS require calls that your CLJS code used and a bit of glue code that exposes them so that the CLJS code can find them at runtime.

Suppose one of your namespaces contains

  (:require ["react" :as react]))

The generated index file will contain require("react") which JS tools understand. You are responsible for further processing that file and making sure that the output of that is loaded before the CLJS output.

So you could for example run

npx webpack --entry target/index.js --output public/js/libs.js

and then include the generated libs.js from webpack and the generated main.js from shadow-cljs in your HTML.

<script defer src="/js/libs.js"></script>
<script defer src="/js/main.js"></script>

This is basically an automated version of the double-bundle approach that a few people have been using for a while.

However this is different in that the output is intended to stay separate: JS code lives in one file and CLJS code in the other. JS code can’t interact with the CLJS code, but CLJS code can access the provided JS dependencies. This does give you very basic code-splitting out of the box, which is a good default IMHO. However, as mentioned in my previous post, this kind of code-splitting is very limited and not as fine-grained as what :js-provider :shadow will give you. You can still use :modules for your CLJS code, but your external JS file might get unwieldy and not fit your :modules properly since JS dependencies won’t be split at all.


:bundle is a good step forward for CLJS in general as good npm support is no longer limited to shadow-cljs. Previously working with CLJSJS packages or manually setting up the “double-bundle” configs could get quite complicated and brittle in larger projects so I’m happy to see this disappear. :bundle still leaves a lot of things unsolved and we’ll see how that evolves over time.

shadow-cljs will indirectly benefit from this when more CLJS libraries move to direct npm package dependencies and away from CLJSJS packages. :bundle may even get more people to try shadow-cljs, since switching between :bundle with regular CLJS and shadow-cljs should not require any code changes whatsoever. Previously that was not the case for projects that had complicated “double-bundle” setups or used many CLJSJS packages.

Beyond that :bundle does not affect anything in shadow-cljs at all. You can just let shadow-cljs continue to do everything for you, without having to worry about setting up other tools. You always have the other options at your disposal if you really want to.

ClojureScript Macros

ClojureScript Macros are a hurdle for most CLJS beginners and wrapping your head around how they work can be quite confusing. I’ll try to cover the basics you need to know to start writing your own macros.

First of all – ClojureScript macros are written in Clojure and run during the ClojureScript compilation on the JVM. A macro is just a function that takes code (as simple data) and generates other code (again simple data) out of it. The CLJS compiler will then turn the result into JS. This is done in Clojure so that the generated JS does not need a full ClojureScript compiler at runtime.
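To make the “code as data” point concrete, here is a tiny Clojure sketch (the unless macro is a made-up example) showing that a macro is just a function from forms to forms:

```clojure
;; a macro receives its arguments unevaluated, as plain data,
;; and returns new data that the compiler then compiles
(defmacro unless [test then]
  `(if (not ~test) ~then))

;; we can inspect the expansion without compiling anything
(macroexpand-1 '(unless false :ok))
;; => (if (clojure.core/not false) :ok)
```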

Self-hosted CLJS is capable of compiling CLJS at runtime and does support macros but requires at least 2MB of JS so it isn’t practical for most build targets. This post will not cover anything self-host related, as that is a topic for advanced users.

Step By Step

  1. Create a CLJ namespace, eg. my.util in src/main/my/util.clj (assuming src/main is one of the :source-paths). Note that this is a .clj file creating a Clojure namespace, not a ClojureScript one. Define the foo macro as you’d normally do in Clojure: (ns my.util) (defmacro foo [& body] ...)
  2. Create a CLJS namespace of the same name, so src/main/my/util.cljs, and add a :require-macros for itself in the ns form: (ns my.util (:require-macros [my.util]))
  3. Use it: (ns my.app (:require [my.util :as util])) (util/foo 1 2 3) ;; :refer (foo) and (foo 1 2 3) would also work


Steps Explained

To explain why we do those steps it is best to go through them in reverse.

  • When the CLJS compiler processes the (util/foo 1 2 3) call it expands the util alias to its fully qualified (my.util/foo 1 2 3) form
  • The compiler looks up my.util/foo. Typically the compiler would only look for that var in the ClojureScript environment (so what was defined in the .cljs files). The my.util namespace however had the :require-macros for itself which tells the compiler to also look for macros of the same name
  • The my.util/foo CLJ macro is found and the compiler will expand the form using that macro
  • The CLJS compiler continues with the expanded form

The Old Way

Before CLJS-948 macros required a bit more ceremony. Thus you still see these patterns in older code. Nowadays all macros should be written as described above.

In the above example we are using the macro self-require directive (ie. :require-macros) to inform the CLJS compiler about macros that supplement a given CLJS namespace. This requires that there actually is a matching CLJS namespace, but in the past the self-require trick didn’t exist so macros sort of existed on their own.

It was common to create dedicated .macros namespaces (eg. as seen in cljs.core.async.macros). core.async provides a go macro that you can nowadays access in two ways.

The old way

  (ns my.app ;; placeholder ns
    (:require-macros [cljs.core.async.macros :refer (go)])
    (:require [cljs.core.async :as async :refer (chan)]))

(go :foo)

or the modern way

  (ns my.app ;; placeholder ns
    (:require [cljs.core.async :as async :refer (chan go)]))

(go :foo)

The problem with the old way was that the consumer of a library had to have special knowledge about macros and how to consume them. In the example above the user had to know that chan was a regular var and that go was a macro. In the modern way the compiler can figure this out on its own and all it took was a matching namespace with the :require-macros self-require trick. If a regular CLJS “var” has a defmacro of the same name in CLJ it will expand the macro first if applicable.

Macro Limitations

Macros can pretty much do everything that regular CLJ macros can do but since you are basically dealing with 2 separate versions of the same namespace there are some things to watch out for.

Gotcha #1: Namespace Aliases and Dependencies

Since the macros run in CLJ and not CLJS, the namespace aliases you configured in CLJS will not work in the macro. It is recommended to use fully qualified names if you need to access code from other namespaces. Defining the :require in the CLJS variant ensures that the clojure.string code will actually be available at runtime; if only the CLJ variant had that :require the CLJS compiler might not provide that namespace.

  (ns my.app ;; placeholder ns
    (:require [my.util :refer (foo)]))

(foo :hello "world")

;; CLJS 
(ns my.util
  (:require-macros [my.util])
  (:require [clojure.string :as str]))

;; CLJ
(ns my.util)

;; this would fail since the CLJ namespace doesn't know about the str alias
(defmacro foo [key value]
  `{:key ~key
    :value (str/upper-case ~value)})

;; so instead use the fully qualified name
(defmacro foo [key value]
  `{:key ~key
    :value (clojure.string/upper-case ~value)})

There may be cases where there actually is a matching CLJ namespace that you could require in the CLJ variant, but if you want to be safe just use the fully qualified name.

Gotcha #2: Caching and :parallel-build

The ClojureScript compiler may cache the result of compilation, so you should avoid side-effects in macros at all cost. The compiler may also compile on multiple threads, so side-effects that rely on a specific ordering can get messy.

If you cannot avoid side-effects make sure to turn off caching and probably :parallel-build.
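In a shadow-cljs build that could look something like this (a sketch; check the documentation for the option names matching your tool and version):

```clojure
;; shadow-cljs.edn sketch: disable compiler caching for a build
{:builds
 {:app
  {:target :browser
   ;; ...
   :build-options {:cache-level :off}}}}

;; plain ClojureScript compiler options equivalent (sketch):
;; {:cache-analysis false
;;  :parallel-build false}
```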

Gotcha #3: Macro-Support code

The CLJ “macro” namespace is just a regular Clojure namespace. You can use all the code you want in it during macro expansion but if your macro expands to code that needs to be called at runtime that code must be defined in the CLJS variant instead.

;; CLJS
(ns my.util
  (:require-macros [my.util])
  (:require [clojure.string :as str]))

(defn my-helper [m] ...)

;; CLJ
(ns my.util)

(defmacro foo [key value]
  `(my-helper {:key ~key
               :value (clojure.string/upper-case ~value)}))

This will expand to (my.util/my-helper ...) which the CLJS compiler will find properly. CLJS code cannot access a defn defined in the CLJ namespace so make sure everything is where it needs to be. You can use a fully qualified symbol for my-helper but the syntax quote (ie. the ` backtick) automatically applies the current namespace (ie. my.util) to all unqualified symbols so it can be omitted here.

Gotcha #4: CLJ Macros

So far this post assumed that the macro only had to generate CLJS code and didn’t need to work in regular CLJ code. Some libraries however will want to work on both platforms so they need a way to detect whether they are supposed to expand to CLJS code or CLJ code. This can be done by inspecting the special &env variable during macro expansion. Note that you cannot do this with reader conditionals when using .cljc files.

(ns my.util)

(defmacro foo [key value]
  (if (:ns &env) ;; :ns only exists in CLJS
    `(cljs-variant ...)
    `(clj-variant ...)))

Gotcha #5: CLJC

CLJC can be difficult to get right since you are defining two completely independent namespaces (CLJ+CLJS) at the same time in one file.

I strongly recommend writing your macros in 2 separate files until you feel comfortable with that. I still do it for 100% of my macros.

A lot of the macros in the wild out there written in CLJC do it wrong in some way. Most of the time this isn’t a big problem since the Closure Compiler gets rid of the code that “leaked” in the CLJS side of things. Nevertheless if you must use CLJC files you should make sure that all CLJ related code is properly behind #?(:clj ...) conditionals.

(ns my.util
  #?(:cljs (:require-macros [my.util])))

(defn my-helper [m] ...)

(defn macro-helper [k v] ...)

(defmacro foo [key value]
  `(my-helper ~(macro-helper key value)))

It is somewhat easy to overlook that in this example macro-helper is actually called during macro expansion and not at runtime. The macro-helper defn however is not behind a reader conditional, so it will be compiled as part of the CLJS variant and exist as a regular function at runtime. That doesn’t necessarily hurt anyone, but depending on how much code “leaks” that way it may make compilation slower or lead to additional bytes in a release build if the Closure Compiler was not able to eliminate all of it (eg. if you used defmethod for a CLJ multi-method).


  • ClojureScript macros are written in Clojure and run on the JVM
  • Define macros by creating a CLJ and CLJS namespace of the same name, where the CLJS variant has a :require-macros directive in its ns
  • CLJC is harder to get right, avoid it if you can
  • Writing and using macros isn’t all that difficult 😉

Hot Reload in ClojureScript

Hot Reload is a very popular concept within the ClojureScript community and refers to the automatic reloading of code during development while keeping the application state. The concept was first introduced by Bruce Hauman and his implementation in figwheel. If you haven’t watched his introductory talk you definitely should.

shadow-cljs has its own implementation for Hot-Reloading which works similarly to what figwheel does. It does expose a few extra hooks that allow a bit more control over the process when needed but the underlying concept is very much the same.

In this post I provide a few more technical details of how it works in shadow-cljs to remove some of the mystery since the underlying process is really simple. I’m going to frame this in the context of a SPA (Single Page Application) since the concept is most useful when combined with a virtual DOM library like React which constructs the UI in a functional manner based on our data. It also is essential that all your data is kept in a central place since hot-reloading gets a lot more complex if data is spread all over the place. Even to the point of not being useful at all anymore.

Application Lifecycle

Any application using Hot Reload should be structured in a similar fashion where each lifecycle event is executed at the correct time and can potentially be re-executed when required. The setup I recommend uses 3 simple stages

  • init: Executed only once, when the application is first loaded. Calls start when done
  • start: Actually renders the application, called after hot-reload is applied
  • stop: Optional, called before any hot-reload is applied (allows shutting down transient state started in start)

In code terms this looks something like this:


(defn ^:dev/after-load start []
  (js/console.log "start"))

(defn init []
  (js/console.log "init")
  (start))

;; optional
(defn ^:dev/before-load stop []
  (js/console.log "stop"))

The build config should use :modules {:app {:init-fn}} to ensure the init fn is called when the app is first loaded. The dummy just directly calls start, but actual apps can initialize themselves here.

The ^:dev/after-load on start (and ^:dev/before-load on stop) metadata tells shadow-cljs to call those functions at the appropriate time in the application lifecycle.

Note that the names init/start/stop don’t actually mean anything special so you can use any name you like instead, they just happen to be the names I chose in my projects.

On the backend the running shadow-cljs watch app process will watch the filesystem for changes and automatically recompile namespaces on change. Once that compilation finishes the update is shipped to the runtime (ie. Browser) and it starts calling all ^:dev/before-load functions (can be zero or more). Once those complete the recompiled code is loaded and the ^:dev/after-load functions are called (should be one or more).

Async Lifecycle Hooks

There are also async variants of those hooks in case you need to do async work that should complete before proceeding with the reload. Suppose you are using a node http server where its .close function takes a callback that is called once the server is actually closed.

(defonce server-ref (atom nil))

(defn start-server []
  ;; create, start and return the node http server instance
  ...)

(defn ^:dev/after-load start []
  (reset! server-ref (start-server)))

(defn ^:dev/before-load-async stop [done]
  (let [srv @server-ref]
    (reset! server-ref nil)
    (.close srv done)))

The async lifecycle hooks receive one extra argument (ie. done), which is a simple callback fn that should be called once the work is done. In this case it can be passed directly to the .close fn. You must ensure the done fn is actually called though, otherwise hot reload will not proceed.

When using the sync version of the lifecycle hooks, before-load would be called and queue the .close, but given the async nature of the node code nothing would actually happen yet. shadow-cljs would immediately proceed with reloading and call after-load again. The .close never had a chance to complete, and the new server would likely fail to start since the old one is still running.

Hooking up React

Note that reloading the code itself doesn’t actually do anything to your UI unless you used a ^:dev/after-load hook to actually trigger the re-render.

When using something like reagent you’d trigger the re-render like this.

(ns ;; placeholder ns
  (:require [reagent.core :as reagent]))

(def dom-root (js/document.getElementById "app"))

(defn ui []
  [:div "hello world"])

(defn ^:dev/after-load start []
  (reagent/render [ui] dom-root))

Depending on your setup you may also need to actually ensure that reagent/react don’t decide to skip the re-render. Sometimes they may think that nothing has changed if you only changed something further down.

With frameworks like re-frame you should call (re-frame/clear-subscription-cache!). When using reagent directly you might need to add an additional prop to signal “change”, so it doesn’t immediately skip since ui didn’t change. As explained later due to the difference in the recompile logic this may not have been necessary in figwheel.

(defn ^:dev/after-load start []
  ;; dummy prop that always changes
  (reagent/render [ui {:x (js/}] dom-root))

When using a single atom to hold all your state, it might be enough to just touch that atom with a dummy change.


;; state lives in its own namespace, eg. (placeholder name)
(defonce app-state (atom {::dummy 0}))

;; in the rendering namespace, with (:require [ :as db])
(defn ^:dev/after-load start []
  (swap! db/app-state update ::dummy inc)
  (reagent/render [ui] dom-root))

Behind the Scenes

Code reloading in ClojureScript is deceptively simple compared to hot-reload mechanisms in other languages (eg. HMR in JS) because of the way the code is structured. Everything is executed in the same scope and a tree of namespace objects is created. Assuming the lifecycle code above lives in an namespace, it compiles to roughly this underlying JS:

goog.provide(""); // creates the global example = {app: {}} object once = function() {
  return console.log("start");
}; = function() {
  console.log("init");
  return;
}; = function() {
  return console.log("stop");
};
On reload this will effectively just redefine (and the others from that namespace) with new function definitions. If code from other namespaces is reloaded this way, the namespaces using it will have accessed it via its fully qualified name, so they automatically call the new version.

Recompile Logic

On the backend the shadow-cljs watch app process will compile a namespace when it is changed and also recompile the direct dependents of that namespace (ie. namespaces that :require it). This is done to ensure that changes to the namespace structure are reflected properly and code isn’t using old references.

(ns example.util)

(defn foo [] ...)

(ns ;; placeholder ns
  (:require [example.util :as util]))

(defn bar []
  (util/foo))

Suppose you changed example.util and renamed (defn foo [] ...) to (defn bar [] ...). If the namespace using it wasn’t automatically recompiled together with example.util, you would not get a proper warning that example.util/foo no longer exists. Given that the old definition still exists in the runtime, the code would actually keep working until you reload your browser (or restart the watch).

shadow-cljs will only automatically recompile the direct dependents since in theory dependencies further up cannot be directly affected by those interface changes. This is different from figwheel since it defaults to using :recompile-dependents true from the ClojureScript compiler. This will recompile ALL dependents so any namespace with a :require on the changed namespace and then all namespaces that required those and so on. This can make recompiles rather slow in larger apps and in the vast majority of cases simply isn’t necessary.
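For reference, plain-CLJS users who run into slow rebuilds can opt out of that default via the :recompile-dependents compiler option (a sketch; other options elided, the namespace is a placeholder):

```clojure
;; ClojureScript compiler options sketch
{:main          my.app.core
 :optimizations :none
 ;; only recompile direct dependents, not all transitive dependents
 :recompile-dependents false}
```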

Don’t forget about the REPL

Hot Reload is sort of brute force since it always replaces the entire namespace. Depending on the size of that namespace and the amount of direct dependents this can noticeably increase the feedback time during development given the amount of time it takes to compile.

It also blindly triggers on file change, which often leads to unnecessary recompiles since you might just save a file in the midst of other pending changes. I have Cursive set up to save files when the editor loses focus, so a hot-reload is triggered as soon as I tab out to look up some documentation. This can be a bit annoying when you are in the midst of a large new feature/rewrite.

The REPL allows much finer-grained control over this entire process. When using a CLJS REPL you can replace a single function directly, which will be a whole lot faster and provide feedback pretty much instantly. However, if that feedback requires rendering the UI you’ll need to trigger the re-render yourself. You could just call your start function from the setup above. Depending on your REPL setup you may want to bind that to a keyboard shortcut.

shadow-cljs also provides a few extra REPL hooks if you still want the convenience of the whole lifecycle and proper dependent-namespace checks. These let you trigger the recompile when you want to, instead of automatically on file change.

When in a CLJ REPL you can first tell the watch to not automatically rebuild on file change via

(shadow.cljs.devtools.api/watch :app {:autobuild false})

This will cause the watch to not trigger an automatic recompile and instead only remember the files that were changed. You can then trigger an actual recompile and the hot-reload lifecycle via

;; trigger compile for specific build
(shadow.cljs.devtools.api/watch-compile! :app)
;; or all running watches
(shadow.cljs.devtools.api/watch-compile-all!)

These are also great candidates for keyboard shortcuts.

With much more control over when things actually happen you can get rid of a lot of “noise” you might otherwise receive. I find myself using this whenever working on changes to my domain model or architecture and using the automatic reloading whenever I’m tweaking the UI (eg. styles and layout).

Things to avoid

Hot Reload can only do so much and requires some discipline in your code architecture.

Holding Code references in State

Hot Reload can only replace code in the global scope. If you hold onto references (eg. functions) directly in some other places those references won’t be replaced and will continue using old versions.

(defonce some-state (atom {:x somewhere/foo}))

This will keep a reference to foo directly in the map which will not be updated when hot-reloading. One way to avoid this is adding an extra function.

(defonce some-state (atom {:x #(somewhere/foo %)})) ;; assuming it expects one arg

This isn’t the best approach but works in a pinch. A better idea is to keep only actual data in atoms and decide later, in the actual code, what to call based on that data.

(defonce some-state (atom {:x ::foo}))

(defn do-something [{:keys [x] :as data}]
  (case x
    ::foo (somewhere/foo data)))

(defn ^:dev/after-load start []
  (do-something @some-state))

Missing :ns require

shadow-cljs is rather strict about namespace references and things can go wrong quickly if you “cheat”. Technically CLJS allows using the fully qualified name of something to reference it directly without an ns alias, but only an actual :require ensures it is available and properly reflected when hot-reloading.


;; in a namespace that does NOT (:require [some.thing])
(defn foo? [x]
  (some.thing/else? x "foo"))

Don’t do this. At least add (:require [some.thing]) to the ns, better yet with an alias. Without the :require shadow-cljs won’t properly recompile when some.thing is changed. It may also lead to other weird race conditions due to parallel compiles. Just don’t do this.

Invoking code directly in the namespace

Avoid calling code directly at the top level of your namespace, since it will be executed every time your code is reloaded. Suppose we didn’t use the :init-fn feature shadow-cljs provides and instead called init ourselves directly.



(ns ;; placeholder ns

(defn init []
  (js/console.log "init"))

(init) ;; DON'T DO THIS, runs again on every reload


In a release build init would only be called once, so everything would be OK, but with hot-reloading the full ns is reloaded and the top-level (init) call would execute each time. :init-fn ensures this doesn’t happen and the code is only executed once as intended.

It may be OK for other code to be directly called in your namespace but be aware that it will get called every time your code is reloaded. Frameworks like re-frame ensure that it properly updates references it creates.

Note that an exception during the load of a namespace may break hot-reload entirely. Avoid running code at the top level as much as possible and instead use the ^:dev/after-load hooks when needed.

Code-Splitting ClojureScript

Code Splitting has been around for a while but I feel it is somewhat underused in most ClojureScript projects and there aren’t many examples showing how you actually use it. Once your project reaches a certain size you should definitely investigate splitting your code into multiple :modules (often called “chunks” elsewhere) and delay loading code until it is actually required.

I’ll show an example using :modules in a shadow-cljs project. This uses a few new things in shadow-cljs, so the setup was previously a bit more complicated, but the basic strategy has worked for 5+ years. It just got a nicer API, which is currently only available in shadow-cljs.

If you’d rather look at some code than read a wall of text: All the code used in this example is available here. You can find the compiled demo app here.


JavaScript is expensive. I won’t go into too much detail here since there is a much better article that covers that topic. The gist is that you should keep your JavaScript as small as possible since the initial parse/compile phase is quite expensive and can make your “app” start rather slow. It is quite easy to reach a megabyte or more of .js code in bigger projects. After gzip that may not look that bad but the engine still has to process the unzipped code. On slower devices that can take quite a long time.


Typically you’ll end up with one .js file once ClojureScript compilation and optimizations complete. That file will contain all the code your app needs; code that wasn’t used was already removed by the Closure Compiler. It is optimized and minimized as much as possible, but not all of it will be required initially. You may have different sections/pages in your app that the user may never visit. There may be a dialog that is rarely used but requires a lot of code since it contains a complex form. Many things will be part of your app but used less frequently. You want to delay loading those until you actually need them.

Enter :modules. Unfortunately “module” is such an overused term so whenever you read :modules think .js files. Instead of creating just one .js file we split it into 2 or more .js files which “group” certain functionality so we can load whatever we need on demand and keep the initial payload small. How you organize your :modules is highly dependent on your application and there is no one-size fits all solution. You should spend some time tweaking this setup for your particular use-case. I’ll demonstrate how to do this using shadow-cljs.

Example App

The example will be using reagent since it is very minimal. I’ll also be using the new React.lazy helpers since Components are a very useful abstraction for code-split points already. You don’t have to use either, the concepts apply to pretty much all browser-targeted apps.

Imagine a shop webapp where a user may sign up to create a new account or sign into an existing account. There should probably be some kind of account overview showing past purchases and so on. You’ll also want a product listing of sorts and probably product detail pages.

The example will only show the bare minimum, focusing on the code-splitting aspects. If you want to learn how to build actual applications using reagent I recommend a dedicated course. (Disclaimer: I’ll get a commission if you pay for the courses.)

So let’s get into it. We’ll have a main namespace serving as the initial entry point, always loaded first. I’ll break it down below, but I think it’s helpful to have the full picture first.

(ns ;; placeholder name for the entry namespace
  (:require
    ["react" :as react]
    [reagent.core :as r]
    [demo.env :as env]
    [demo.util :refer (lazy-component)]))

(defonce root-el (js/document.getElementById "root"))

(def product-detail (lazy-component demo.components.product-detail/root))
(def product-listing (lazy-component demo.components.product-listing/root))
(def sign-in (lazy-component demo.components.sign-in/root))
(def sign-up (lazy-component demo.components.sign-up/root))
(def account-overview (lazy-component demo.components.account-overview/root))

(defn welcome []
  [:h1 "Welcome to my Shop!"])

(defn nav []
  (let [{:keys [signed-in] :as state} @env/app-state]
    [:ul
     [:li [:a {:href "#" :on-click #(swap! env/app-state assoc :page :welcome)} "Home"]]
     [:li [:a {:href "#" :on-click #(swap! env/app-state assoc :page :product-listing)} "Product Listing"]]
     (if signed-in
       [:li [:a {:href "#" :on-click #(swap! env/app-state assoc :page :account-overview)} "My Account"]]
       [:<>
        [:li [:a {:href "#" :on-click #(swap! env/app-state assoc :page :sign-in)} "Sign In"]]
        [:li [:a {:href "#" :on-click #(swap! env/app-state assoc :page :sign-up)} "Sign Up"]]])]))

(defn root []
  (let [{:keys [page] :as state} @env/app-state]
    [:div
     [:h1 "Shop Example"]
     [nav {}]
     [:> react/Suspense {:fallback (r/as-element [:div "Loading ..."])}
      (case page
        :product-listing [:> product-listing]
        :product-detail [:> product-detail {}]
        :sign-in [:> sign-in {}]
        :sign-up [:> sign-up {}]
        :account-overview [:> account-overview {}]
        :welcome [welcome {}]
        [:div "Unknown page?"])]]))

(defn ^:dev/after-load start []
  (r/render [root] root-el))

(defn init []
  (start))

Example Component

(ns demo.components.sign-in
  (:require [demo.env :as env]))

(defn root []
  [:div
   [:h1 "Sign In"]
   [:p "imagine a form ..."]
   [:button {:on-click #(swap! env/app-state assoc :signed-in true :page :account-overview)} "Sign me in already!"]])

On startup the init function will be called. Since we don't need to initialize anything in this example we'll just proceed with start, which renders the reagent root component. The root just renders components based on the :page setting in the demo.env/app-state atom. Depending on the setting it will render some of the "lazy components" we set up. Usually you would just add a (:require [demo.components.sign-in ...]) in your ns definition and use it directly. Given how ClojureScript works that would always load the required namespace before loading the namespace that requires it, meaning everything would end up being loaded up front.

What the lazy-component utility allows is referencing something that will be declared later. We don't actually care how it is declared, just that it will be at some point. shadow-cljs will fill in the required information on how to load the actual component via the shadow.lazy utility, which we wrapped to remove some of the boilerplate code.

Using the React.lazy utility this will only actually start loading the associated code when the component is first rendered and then automatically re-render once the code finishes loading. The react/Suspense wrapper will show the :fallback while any code is loading. You can do several other things here, it really depends on your application. The important thing to remember is that we referenced something that may not be loaded yet and must be loaded asynchronously before it can be rendered.
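To make the mechanics less magical, here is a tiny sketch in plain JavaScript of the caching idea behind React.lazy. This is not React's actual implementation (the real one is promise-based and cooperates with Suspense while the code loads); lazyComponent and the fake component here are made up purely for illustration.

```javascript
// Sketch: the loader only runs on first render and its result is
// cached, so subsequent renders reuse the already loaded component.
function lazyComponent(loader) {
  let cached = null; // the loaded "component", once available
  return function render(props) {
    if (cached === null) {
      cached = loader(); // first render triggers the load
    }
    return cached(props);
  };
}

// The loader below stands in for fetching a split-off module.
let loads = 0;
const signIn = lazyComponent(() => {
  loads += 1; // count how often the underlying code was "loaded"
  return (props) => `<h1>Sign In for ${props.user}</h1>`;
});

console.log(signIn({ user: "alice" })); // loads on demand, then renders
console.log(signIn({ user: "bob" }));   // reuses the cached component
console.log(loads);                     // → 1
```

The important property is exactly what the text describes: nothing is loaded until the component is first rendered, and loading happens at most once.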

Since the main namespace does not directly require the components we can now define our actual splits using :modules in the build config.

Example Config

How you structure your :modules is completely up to you and your application at this point. I'll show the most verbose thing first and then discuss an alternative strategy later on. Don't fear this; the config is pretty simple, it just looks a bit long. I wish it was shorter but it is this way for some important reasons which I'll maybe explain in a later post.

  {:target :browser
   :module-loader true
   :modules
   {:main
    {:entries [demo.app]
     :init-fn demo.app/init}

    :account-overview
    {:entries [demo.components.account-overview]
     :depends-on #{:main}}

    :product-detail
    {:entries [demo.components.product-detail]
     :depends-on #{:main}}

    :product-listing
    {:entries [demo.components.product-listing]
     :depends-on #{:main}}

    :sign-in
    {:entries [demo.components.sign-in]
     :depends-on #{:main}}

    :sign-up
    {:entries [demo.components.sign-up]
     :depends-on #{:main}}}}

As explained above we will start with the :main module (becoming the main.js output), calling the init function on load. It is the only module that can be loaded directly without loading any other. Then we define one module for each component; they all depend on the :main module since that provides the common code such as reagent, react and of course cljs.core. :module-loader true just tells shadow-cljs that it needs to do a couple extra steps to allow loading the code dynamically.
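The :depends-on declarations effectively form a small dependency graph: before a module's file can run, everything it depends on must already be loaded. A sketch of that ordering logic in plain JavaScript (hypothetical helper, not shadow-cljs internals):

```javascript
// Module graph mirroring the :depends-on config: each entry lists the
// modules that must be loaded before it.
const moduleGraph = {
  "main": [],
  "account-overview": ["main"],
  "product-detail": ["main"],
  "product-listing": ["main"],
  "sign-in": ["main"],
  "sign-up": ["main"],
};

// Depth-first walk: a module is appended only after all its dependencies.
function loadOrder(name, graph, seen = new Set(), order = []) {
  if (seen.has(name)) return order;
  seen.add(name);
  for (const dep of graph[name]) loadOrder(dep, graph, seen, order);
  order.push(name);
  return order;
}

console.log(loadOrder("sign-in", moduleGraph)); // → [ 'main', 'sign-in' ]
```

So requesting the sign-in page means loading main.js first and sign-in.js after, which is exactly what the module loader arranges at runtime.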

After running shadow-cljs release app we end up with a bunch of .js files in our default :output-dir "public/js". This also works using watch and compile in development mode but since code-splitting is mostly relevant for release builds I’ll focus on that.

└── js
    ├── account-overview.js
    ├── main.js
    ├── product-detail.js
    ├── product-listing.js
    ├── sign-in.js
    └── sign-up.js

In the primary index.html file we just load the js/main.js file and let the code deal with loading the other files when needed. We won’t actually reference them directly ourselves although that would be perfectly fine.

<!doctype html>
<html>
<head>
  <link rel="preload" as="script" href="js/main.js">
  <title>CLJS code-splitting example</title>
</head>
<body>
  <div id="root"></div>
  <script async src="js/main.js"></script>
</body>
</html>

At this point it would probably be useful to look at a Build Report and inspect the output a little. This is optional and you can leave this as is and it will probably work.

However you know more about your project than shadow-cljs could ever know. You know what the most common paths users take are. Some modules may be tiny in which case it might make sense to combine them in some way. Things commonly used together should probably be grouped in the output. Since loading code requires an async step and may take some time depending on the user's network/computer it may be better to wait a bit longer on startup instead of showing a "Loading ..." later on. Once React Concurrent Mode becomes available this will actually matter less but for now it is relevant.

In the example the output files are actually tiny and it doesn't make sense to split them at all, but in a real app the files would be bigger and may contain npm components that are only used in one module but not the others and so on. It wouldn't make sense to always wait for that code to download.

So given the knowledge we have about our app it is probably safe to assume that our users are always going to visit the product-listing page in combination with some visits to product-detail. So instead of having an extra interruption while waiting for product-detail to load we can just bundle it together with the product-listing. We can do the same for the user-related stuff.

  {:target :browser
   :module-loader true
   :modules
   {:main
    {:entries [demo.app]
     :init-fn demo.app/init}

    :account
    {:entries [demo.components.account-overview
               demo.components.sign-in
               demo.components.sign-up]
     :depends-on #{:main}}

    :products
    {:entries [demo.components.product-detail
               demo.components.product-listing]
     :depends-on #{:main}}}}
We don’t need to change anything in the code since it is already setup and shadow-cljs deals with the rest. It might make sense to just keep everything in the :main module and not split at all, always test for your project.


Code-Splitting is well worth the effort. It does not involve a whole lot of code and can potentially make your app load a lot faster which your users will appreciate. Don’t be intimidated by the initial extra setup and let shadow-cljs help you with the rest.

Don’t blindly split everything as it may actually make things slower. Just generate a build report, tweak the config, re-compile and repeat until you reach your ideal setup. Always remember the re-evaluate as your code may evolve over time and different splits may become more relevant.

Note that these :modules can be nested several levels deep, it doesn't have to stay "flat" like our example. Aim to keep your :modules below a certain size but remember that it is faster to load one 100KB file than one 25KB file that immediately loads another 50KB and then another 25KB file before it is able to render anything. Looking at 3 "Loading ..." spinners is no fun. Always test on slow network/device configurations. The Chrome Devtools make this incredibly easy. Do not assume that everyone is on super high end networks/hardware.
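To put rough numbers on the chained-loading point (the latency and throughput values below are assumed, purely illustrative):

```javascript
// Each sequential request pays the round-trip latency again, so a chain
// of three small files ends up slower than one bigger file of the same
// total size.
const latencyMs = 200; // assumed round-trip latency per request
const kbPerSec = 50;   // assumed throughput on a slow connection

const timeFor = (kb) => latencyMs + (kb / kbPerSec) * 1000;

const single = timeFor(100);                             // one 100KB file
const chained = timeFor(25) + timeFor(50) + timeFor(25); // 25 -> 50 -> 25

console.log(single, chained); // → 2200 2600
```

Same 100KB of code either way, but the chain pays the latency three times, and on a high-latency mobile connection the gap gets much worse.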

Hopefully this example wasn't too complex to follow. You can find me in the Clojurians #shadow-cljs channel if you have questions. If this was useful and you want to see more articles like this consider joining my Patreon.

PS: webpack follows a different implementation and only splits code where it finds dynamic import() calls in the code. Figuring out how your output chunks are organized or optimizing them can be rather difficult. While the initial config may be simpler, the output is harder to optimize. The Closure Compiler settled on a static configuration, which we adopted. Both systems work and have different trade-offs.

What shadow-cljs is and isn’t

I’ll try to properly describe what shadow-cljs actually is since there seem to be a few common misconceptions that keep coming up that are simply incorrect. This is not actually an introduction for shadow-cljs, rather a definition of why it is different compared to other tools. This assumes some familiarity with the current CLJS ecosystem. Please refer to the User’s Guide to learn more about what shadow-cljs actually does.

So first a very brief overview of the most common points, followed by a more in-depth explanation.

What shadow-cljs isn’t

  • It is not a fork of ClojureScript
  • It is not a dialect of ClojureScript
  • It does not introduce new “syntax” for require (eg. (:require ["react" :as r]))
  • It is not self-hosted

What shadow-cljs is

Short Version: shadow-cljs is a fully featured build tool for ClojureScript and JavaScript. It integrates with the npm JavaScript ecosystem and allows accessing it from ClojureScript. It runs on the JVM and uses the Closure Compiler to process JavaScript and create optimized (aka. minified) JavaScript output. It does not use self-hosted ClojureScript, meaning that it still requires Java to run.

Basically you can split the work required to perform a ClojureScript build into 4 stages:

  1. Basic Setup which just sets up the environment for later stages, applies configuration and so on.
  2. Compile ClojureScript to JavaScript one namespace at a time
  3. Organize the output
  4. Optimize the output (optional)

shadow-cljs replaces Steps 1, 3 and 4. Step 2 remains mostly unchanged; shadow-cljs only changes things where there is no "official" API to hook into a particular step (eg. npm namespace aliasing). The very large majority of the code in shadow-cljs actually deals with Steps 3 and 4, which have nothing to do with ClojureScript anymore since at this point only JS code exists and no more CLJS compilation happens.

What goes into a ClojureScript build?

Before we can further define what shadow-cljs actually does we need to understand all the involved steps when it comes to actually compiling ClojureScript and running it on a given platform.

Step #1 – Basic Setup

Before any compilation can be done we need to initialize the compiler state and collect some basic information. The compiler options are validated and applied. Some information from the classpath is extracted (eg. deps.cljs) and an index of :foreign-libs is created from that information. For example if you have cljsjs/react in your dependencies it will contain a deps.cljs which sets up the cljsjs.react namespace alias and so on. No actual CLJS compilation is taking place yet.

In addition the Closure Library “index” is loaded which also just contains a simple lookup of which namespaces any given file in the Closure Library provides. Eg. the goog.object namespace is provided in the goog/object/object.js resource on the classpath.

When using :npm-deps the node_modules directory is also indexed and again a namespace index is created that may map the react namespace to the node_modules/react/cjs/react.production.min.js file. This is actually a bit more complicated but what we end up with is just a mapping of namespace -> file.
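The gist of that mapping step can be sketched in a few lines (an in-memory toy, not the actual resolver; real resolution also has to honor the "browser" field, nested entry points and more):

```javascript
// Toy stand-ins for the package.json files found under node_modules.
const packageJsons = {
  "react": { main: "cjs/react.production.min.js" },
  "left-pad": {}, // no "main" field, so index.js is assumed
};

// Map a require name to the concrete file that provides it, which is
// all the later compilation stages need to know.
function resolvePackage(name) {
  const pkg = packageJsons[name];
  const entry = pkg.main || "index.js";
  return `node_modules/${name}/${entry}`;
}

console.log(resolvePackage("react"));
// → node_modules/react/cjs/react.production.min.js
```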

Once everything is set up the compiler state is bound to the cljs.env/*compiler* atom and compilation can start.

Step #2 – Compile ClojureScript

Now we actually compile ClojureScript to JavaScript. This is done by cljs.analyzer and cljs.compiler. It involves reading the .cljs source one form at a time using tools.reader, analyzing it, expanding macros in the process and then generating the proper JavaScript (with proper source maps).

Suppose this trivial example:

(ns demo.app
  "some docstring")

(defn init []
  (js/console.log "Hello World!"))

After compilation this is turned into JavaScript which follows the Closure JS format.

goog.provide("demo.app");
goog.require("cljs.core");
demo.app.init = function() {
  console.log("Hello World!");
};
One very important aspect of CLJS compilation is that all dependencies must be analyzed first. Every CLJS namespace has an implicit dependency on cljs.core and if a namespace has a (:require [something.else :as foo]) in its ns form that dependency must be analyzed before the ns itself can be analyzed. During analysis some of the data from the AST is extracted and added to the compiler state. This mostly contains metadata information about namespaces and their :defs (ie. def, defn). Greatly simplified the above code will generate something like

{:name demo.app
 :meta {:doc "some docstring"}
 :defs {init {:name demo.app/init
              :fn-var true
              :meta {...}}}}
The data from those namespaces is used during analysis so we can warn about missing vars and generate the proper code for protocols and such. This is just data, it can be cached and written to disk and read again later.

Some dependencies will be provided by other JS files, either from the Closure Library, :foreign-libs or npm directly. All the analyzer actually needs to know here is which variable name to use when referencing code from other “namespaces”. It doesn’t actually do analysis of said JS code (yet).

So suppose we expand the example to use some npm and Closure Library dependencies:

(ns demo.app
  (:require
    [react :as r]
    [react-dom :as rd]
    [goog.dom :as dom]))

(defn init []
  (-> (r/createElement "h1" nil "Hello World!")
      (rd/render (dom/getElementById "app"))))

which roughly compiles to:

goog.provide("demo.app");
goog.require("cljs.core");
goog.require("goog.dom");
demo.app.init = function() {
  module$react_dom.render(module$react.createElement("h1", null, "Hello World!"), goog.dom.getElementById("app"));
};

In this case I used module$react as the alias for (:require [react :as r]). It won't actually be that in real builds but all you need to remember is that it is just a variable name. The alias mechanism may actually change depending on build options but it is a close enough approximation to assume that the indexes created in Step #1 will be used as a basis to assign these aliases. There may be a goog.require("module$react") or there may not be; it is not relevant since goog.require is basically a noop inside compiled files.

There is a special case for :foreign-lib provided names (eg. everything cljsjs.*) in that those don’t actually create any aliases at all instead just provide a global variable you use directly. So cljsjs.react is not used via a namespace alias but js/React. This is bad for various reasons and CLJSJS started adopting the alternative approach of using proper namespace aliases for the commonly used packages (eg. react).

Once all CLJS namespaces have been compiled we can proceed with the build.

Step #3 – Bundle/Organize JavaScript

If you take the above generated output and run it in a browser you'll get an error saying goog is not defined. goog is actually provided by the goog/base.js file which provides the goog namespace and is an implicit dependency for all CLJS (and the Closure Library). So once ClojureScript compilation finishes we need to generate a few additional files, move them to the proper places and ensure that they are loaded in the proper order. Nothing that is done here is specific to ClojureScript; at this point we only have a bunch of .js files that need to be massaged into the right shape to be loadable in different environments (eg. the Browser).

Step #4 – Optimize JavaScript

This is actually optional and only done for “release” builds, it is not done during development. :optimizations :none means that this is skipped.

After the .js files are organized we still have a lot of them and they can get quite large. This is impractical when building for the Browser since it can take quite a while to load and often contains code we won’t actually use. This is where the Closure Compiler comes into play. It takes all the generated .js code and processes it with the given :optimizations setting (eg. :advanced). This will analyze all the JS code, remove the parts that aren’t used and shorten all variable names and a bunch of other really cool stuff to make the output as small as possible.

What about the REPL?

The REPL actually makes things a bit more complicated but it basically just keeps the compiler state around and repeatedly does Step #2 and then does a custom Step #3 and organizes the code into a shape that can be loaded directly in the REPL. This may happen entirely in memory and not actually involve writing any files. Step #4 is never done for the REPL and is in fact impossible to do for :advanced optimized code.

So what about shadow-cljs?

shadow-cljs replaces Step 1,3,4. Step 2 remains basically the same. A different build tool for the same ClojureScript language we all love. Compare it to lein and boot, both are still just Clojure underneath. shadow-cljs uses the same ClojureScript core library (eg. cljs.core) as any other tool would.

It interfaces directly with cljs.analyzer and cljs.compiler and does some minor modifications since there are no “official” hooks into the namespace aliasing required for npm dependencies and such. Compilation of actual CLJS is unchanged and only the parts that involve interop with npm are changed. I’m very careful about introducing new stuff and want to remain 100% compatible, any new features are opt-in and have “fallbacks”.

Unfortunately the support for :npm-deps in CLJS is rather unreliable so sometimes people mistakenly think that (:require ["foo/bar" :as x]) is something shadow-cljs specific when it is absolutely not. Strings were added since there are certain JS requires that cannot be expressed as a symbol properly. Some examples:

  • (:require ["object.assign" :as x]) Although this could be a valid symbol it references node_modules/object.assign not node_modules/object/assign and doesn’t follow CLJ(S) rules for . in symbols
  • (:require ["react-virtualized/dist/commonjs/AutoSizer" :as x]) too many /, can’t be mapped to . due to the ambiguity above
  • (:require ["@material/button" ...]) npm “scoped” packages, @ already used for deref
  • (:require [decompress-tar]) following the standard CLJ(S) naming rules would map to node_modules/decompress_tar instead of the actual node_modules/decompress-tar. JS allows -, _ in names.

JavaScript in general and especially npm does not have a proper namespacing system. Everything is just a bunch of files in a “package” which provide some sort of basic isolation. It is just files otherwise, stitched together by relative file paths.

Just like Clojure allows using any Java class we required a way to address all JS, and string requires let us do that. In the beginning shadow-cljs actually added a special (:js/require ["..." :as x]) syntax to ns to deal with this but that would have been something not supported by plain CLJS. After some discussion David Nolen actually suggested allowing strings in :require and it was added not too much later.

This change did not originate in shadow-cljs although it is used far more frequently here and certainly was part of the discussion. Since it doesn’t work reliably enough with :npm-deps people just kept using symbols and never adopted strings. If it were my decision I would not have allowed mapping JS names to symbols (eg. (:require [react :as r])) and instead always forced using strings for JS requires. It wasn’t my decision to make so shadow-cljs supports the symbols as well as strings. I do not like guessing if I’m using a CLJS namespace or something from JS and the ambiguities that come with it, so I just recommend using strings. One important aspect of CLJ(S) is integration with the Host and working with paths and files is just a fact of life in JS that isn’t going anywhere.

So why shadow-cljs?

Why re-implement Step 1,3,4 instead of building on top of what the official tools provide?

A rather long time ago I asked:

What exactly needs to be in CLJS?

And in my opinion the answer is still Step #2. ClojureScript should focus on compiling .cljs|.cljc -> .js.

Everything else should definitely have a default implementation, which CLJS provides via cljs.closure, but there should be alternatives. Building on top of cljs.closure or the "official" build API is not flexible enough for what shadow-cljs wants to do.

Rich Hickey made the very smart decision to use the Closure Compiler, which was/is fantastic. People just falsely assume that it is actually required for CLJS compilation when it isn't and is only used in the optional Step #4. Nowadays it would probably be better to emit ES6 code instead but back then that wasn't an option since it didn't exist.

The support for :modules is what started me on the path of shadow-cljs (then shadow-build) and it simply wasn’t an option to use the official APIs back then since it wasn’t supported. :modules support was added eventually but only after shadow-cljs had already proven that it was a good thing to have. The npm support in shadow-cljs is somewhat similar. It started as an experiment to see how practical it is.

In my opinion these experiments should not be done in ClojureScript directly.

:npm-deps sort of shows that. It started as an experiment asking: What if everything is :advanced compiled? This is a very valid question to ask and I wish it actually worked. In practice however it doesn’t work very well and probably won’t for a long time. This is not a problem with the implementation. npm is just a total wild west of competing JavaScript standards and idioms that will simply never work correctly with :advanced. So :npm-deps is probably forever limited to “it works for package a,b,c but not the rest”. The idea is still fantastic, it just didn’t work for my projects.

I wanted to try a different approach for so many things and that is what you get in shadow-cljs. Should this be the default? Definitely not. JavaScript is still evolving and keeps adding stuff constantly. Keeping up with all of that is honestly frustrating at times and probably something we want to avoid. Alternatives like CLJSJS or using webpack certainly can work and make sense in certain situations. They just don’t solve the problems I wanted to solve.

Point is that I think most of this is not actually related to ClojureScript itself. Clojure doesn’t include lein or boot. Even tools.deps is just a library. I think it is valuable to try different approaches to building ClojureScript projects, that is what shadow-cljs is about. It is different since so far it is the only tool that doesn’t build on top of the official “build” APIs. I hope there will be others. This is an area worth exploring and there is so much left to learn. The fantastic ideas Bruce Hauman had with figwheel certainly influenced a couple of things in shadow-cljs. I hope my work can have a similar impact on ClojureScript development in the long run.

In the meantime I want to provide a stable and reliable tool that works and makes your life easier when working with ClojureScript and JavaScript. You can support me on Patreon.

Why not webpack?

ClojureScript recently released version 1.10.312 and with that came the official webpack Guide for ClojureScript.

In this post I will go over the reasons why shadow-cljs does not "just use" webpack for its npm integration. webpack is certainly the most used JS build tool out there so there is some appeal to just using it instead of rolling a custom solution. Many early prototypes of shadow-cljs actually did use it but there were several limitations for which I did not find acceptable solutions.

What is ClosureJS?

First of all we need a bit of background about how the webpack world sees JavaScript and how the Closure Compiler does. ClosureJS is used pretty much exclusively throughout the Closure Library and the ClojureScript compiler generates this style of code for us. Writing this style of code by hand is pretty tedious and not many people do. I guess that is the main reason why it never caught on with the greater JS community.

The major difference to almost all other forms of JS is that everything is namespaced. goog is the base object with provide and require methods that set up a pseudo-ish namespace dependency system built on top of normal JS objects. If you say goog.provide("goog.object") it will set up a global var goog = { object: {} }; which is then used to assign properties to. The goog.require is used to sort dependencies in the proper order and to ensure they are loaded before the file that required them.
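A toy re-implementation of that idea in a few lines of plain JavaScript (the real goog/base.js does considerably more, e.g. dependency loading; the names here are mine):

```javascript
// provide("goog.object") creates the nested object path on a shared
// root so later files can hang properties off it. require() here is
// just a lookup, since everything lives in one global structure anyway.
const root = {};

function provide(ns) {
  let cur = root;
  for (const part of ns.split(".")) {
    cur = cur[part] = cur[part] || {}; // create each path segment once
  }
  return cur;
}

function requireNs(ns) {
  return ns.split(".").reduce((cur, part) => cur && cur[part], root);
}

provide("goog.object");
root.goog.object.get = (obj, key) => obj[key]; // a file "defines" a fn

console.log(requireNs("goog.object").get({ foo: "bar" }, "foo")); // → bar
```

Because every definition hangs off one shared root, concatenating files and executing them in the global scope is safe, which is the property the Closure Compiler exploits.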

Let's see what the CLJS compiler generates for this code:

(ns demo.app
  (:require [goog.object :as obj]))

(def obj #js {:foo "bar"})

(obj/get obj "foo")

which compiles to:

goog.provide("demo.app");
goog.require("cljs.core");
goog.require("goog.object");
demo.app.obj = {"foo":"bar"};
goog.object.get(demo.app.obj, "foo");

Since there is no convenient namespace aliasing in JS you always have to type out the full namespace of everything, which is very inconvenient. Luckily CLJS makes this very easy so it's no bother at all.

Since everything is namespaced we can freely execute everything in the global scope or concatenate all files together. This is how the Closure Compiler works as well. Everything is using a shared global scope and things can freely be moved around or renamed. goog.object.get just becomes something really short (e.g. x) and others get removed completely when not used.

What is CommonJS? UMD? AMD? ESM?

In contrast to that we have several other mechanisms for organizing JavaScript and pretty much all of them have one fundamental idea: Each file has its own scope. Each file can only see its own local variables unless it specifically imports the exports of other files. So the filesystem provides a sort of pseudo-ish namespacing system which is not explicitly repeated in the code.

// foo.js
var foo = 1;
exports.hello = function() { return foo; };

// bar.js
var foo = require("./foo");

As you can probably see this would break pretty badly if we just concatenated the files together like that: both files declare a local var foo. So unlike ClosureJS we need to wrap each file in a function to isolate its scope and then wire up the exports properly. In node or webpack the simplified version looks like this:

// foo.js
function(module, exports, require) {
    var foo = 1;
    exports.hello = function() { return foo; };
}

// bar.js
function(module, exports, require) {
    var foo = require("./foo");
}
The module system then adds some helper functions to ensure that the wrapped functions each get their own module, exports and require arguments and that require properly maps to the exports of others. There are several module systems like UMD and AMD or just plain CommonJS but they all basically work like this; I simplified a bit but it's close enough.

What about webpack then?

webpack was built for the system above. Everything is wrapped in functions. Nothing is in the global scope. There are some fairly recent attempts to get rid of (or combine) some of the wrapping functions but for the most part this is not the norm yet.

So CLJS and Closure want everything to be global and namespaced but all we have is IIFEs (immediately-invoked function expressions). We somehow need to bridge the two systems and that is exactly what the guide is all about. You manually setup a .js file that pulls the JS dependencies you want into the global scope by explicitly assigning them to window.

import React from 'react';
import ReactDOM from 'react-dom';
window.React = React;
window.ReactDOM = ReactDOM;

This actually uses the newer EcmaScript 6 import syntax, which is a whole other can of worms we may get to later.

The evolution of shadow-cljs + npm

When I started with the npm support in shadow-cljs I started with exactly what the new ClojureScript guide suggests and tried to automate everything. First I would compile all ClojureScript and assign a pseudo-ish namespace for every JS require I found. Well I first had to make the JS requires declarative to get rid of actual js/require calls and global js/React uses but I’ll skip that part as CLJS supports this as well nowadays.

(ns demo.app
  (:require ["react" :refer (createElement)]))

(createElement "h1" nil "Hello World")

CLJS output


shadow$js["react"].createElement("h1", null, "Hello World");

And it then generated an index.js for webpack to process:

window.shadow$js = {
  "react": require("react")
};
As long as the generated webpack output is loaded before the actual CLJS output everything is fine and actually worked pretty well. I had a fully automated system that enabled me to make easy use of everything on npm. I was happy with this for a while and actually close to releasing it.

Problem #1: Code-Splitting

You may have noticed the "As long as" in the previous paragraph. I am using :modules aka. code-splitting in pretty much all my :browser builds. Code-splitting in webpack works completely differently than Closure Modules so combining both proved exceptionally hard. I tried creating solutions using externals but none of them worked satisfactorily. The "vendor" approach, ie. packaging all deps into one big .js file, worked fine but offended my perfectionism: pages that didn't use certain npm deps ended up loading them regardless just because some other page did. I failed to get webpack to generate a .js file per Closure Module. This is of course not a problem if you are not using code-splitting at all. I think you can make this work but I didn't feel like going that deep into webpack.

Problem #2: Full Interop

One of the things I absolutely wanted to work was full 100% two way interop. JS code should be able to use CLJS code and CLJS should be able to use JS. No compromise allowed.

In the naive approach both systems run independently. CLJS is compiled on its own just as the JS. Using some JS globals to glue them together without ever really knowing where those globals actually came from.

In one iteration I had a webpack "loader" that would just translate require("../path/to/demo/app.js") to basically return the global created elsewhere. This looks integrated but it really isn't since ALL CLJS code is still loaded together and all JS code is loaded together. They still aren't mixed.

Out of this :npm-module was born. It outputs the CLJS code in a CommonJS-ish format that webpack can understand. All files are generated into a flat directory which defaults to node_modules/shadow-cljs. Due to how webpack or node in general resolve dependencies we can conveniently import it and everything maps nicely together.

(ns demo.app
  (:require
    ["react" :as react]
    ["./someComponent" :as comp]))

(defn hello []
  (react/createElement comp #js {:teh "prop"}
    (react/createElement "h1" nil "hello world")))

// demo/someComponent.js
var React = require("react");
class SuperComponent extends React.Component {...};
module.exports = SuperComponent;

// index.js
var ReactDOM = require("react-dom");
var app = require("shadow-cljs/demo.app");
ReactDOM.render(app.hello(), ...);

The CLJS code can still be optimized by Closure :advanced compilation and webpack just takes the generated output and is ultimately in charge of the final output. The CLJS code will just emit a normal require("react") which webpack then fills in later when working on the rest of the .js files. Basically there would be small chunks of optimized JS code interposed with normal CommonJS. Since the code is CommonJS compatible it also works nicely with pretty much all other JS tools out there and also node directly.

However it also meant that webpack was ultimately in charge of packaging things for the browser. It would minimize the already :advanced compiled output again since it had no understanding what it was actually consuming. Just using :none sidestepped that issue but webpack really isn’t comparable when it comes to optimizing CLJS code so the output was huge. Also our nicely optimized CLJS code was now getting wrapped in IIFEs which adds a bit of unnecessary overhead.

During development things become especially hard since the usual live-reload for CLJS basically becomes impossible: the code has to be processed by webpack first to fill in the require calls. The REPL also became pretty unreliable and couldn't load JS deps at all. You can't just (require '["some-package" :as x]) to try packages at the REPL when using webpack.

Back to the Hammock

Running webpack as a separate build step proved very annoying in practice (YMMV) and ultimately insufficient when it came to more complex builds or REPL development. We don’t want JS to be a second-class citizen. ClojureScript has this other :npm-deps path which ultimately wants to pass all JS code through Closure :advanced compilation. I really tried making this work and some day it might still happen but as of today it is way too unreliable and will need some serious work in the Closure Compiler and on the ClojureScript side as well. The idea is simple: Resolve all the JS deps, order them, pass them to Closure and proceed as usual.

The Closure Compiler can translate the CommonJS/ESM code into ClosureJS code but instead of wrapping everything into functions all local variables are renamed to unique names to avoid clashes. Closure will then just rename or remove them later so it doesn’t matter how long the names are in the meantime.

Using the foo.js example from above we get:

// instead of the wrapped foo.js
function(module, exports, require) {
    var foo = 1;
    exports.hello = function() { return foo; };
}

// we get
var foo$module$demo$foo = 1;
var module$demo$foo = {exports:{}};
module$demo$foo.exports.hello = function() { return foo$module$demo$foo; };

// bar.js
var foo$module$demo$bar = module$demo$foo.exports;

This is perfect in theory. We can consume code like that easily in CLJS since it behaves like any other code from the Closure Library. In practice however things are a bit more complicated and much of the JS code on npm does not survive :advanced compilation.

How it works in shadow-cljs now

What shadow-cljs then does instead is resolving the code exactly like webpack would and passing that through Closure :simple optimizations while also wrapping them in functions to isolate them. CLJS is still processed by :advanced as usual.

Since shadow-cljs processes all of the JS it can also extract all the exported property names from the JS and automatically add them to the externs for the :advanced CLJS build. Meaning that you can get quite far without ever actually writing any externs by hand.
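The idea behind the automatic externs can be shown with a toy sketch. Note that shadow-cljs walks the JS AST to do this; `collectExternNames` here is purely hypothetical and just inspects a live exports object, but the principle is the same: every exported property name gets appended to the externs so :advanced won’t rename CLJS calls to it.

```javascript
// Toy sketch only, not the actual shadow-cljs implementation:
// collect the exported property names so they can be added to the
// externs for the :advanced CLJS build.
function collectExternNames(exportsObj) {
  return Object.keys(exportsObj).sort();
}

var libExports = {};
libExports.hello = function () { return 1; };
libExports.version = "1.0.0";

console.log(collectExternNames(libExports)); // → [ 'hello', 'version' ]
```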

By directly resolving the actual files shadow-cljs can move them into the correct places to play nicely with :modules. They are still technically prepended to each Closure Module but they are executed at the correct time later on. They are not executed immediately since that doesn’t work with circular dependencies which JS unfortunately allows.

webpack will rewrite require calls to use numeric ids instead of the file names and basically ship one big JS object with id -> fn. This is a nightmare for caching since numeric ids can change if your dependencies resolve in a different order (after adding/removing deps). shadow-cljs instead keeps the pseudo-ish names that Closure would generate based on the file path.

// node_modules/lib/foo.js
shadow$provide["module$node_modules$lib$foo"] = function(module, global, process, exports, require) {
    var foo = 1;
    exports.hello = function() { return foo; };
};

// node_modules/lib/bar.js
shadow$provide["module$node_modules$lib$bar"] = function(module, global, process, exports, require) {
    var foo = require("module$node_modules$lib$foo");
};

The require function basically just looks at the shadow$provide object to get the function handle and stores the result before returning. When called again it will just return the previous result. Since the file names are stable they can easily be cached so the overhead of running through :simple is only paid once leading to far better compile performance in watch.

Note that the above is the code before :simple, after optimizations it’ll look more like

shadow$provide.module$node_modules$lib$foo = function(a,b,c,d,e) {
    d.hello = function() { return 1; };
};

shadow$provide.module$node_modules$lib$bar = function(a,b,c,d,e) {
    // ...
};

Closure does some pretty wild tweaks to the code and the result is pretty much always better than what comparable JS tools achieve. It is not quite :advanced but it is still pretty good.

Note that the long pseudo-ish names are preserved in the strings but gzip takes care of most of that. I might still revisit this at some point but for now you’ll see kinda long strings in the code sometimes. The important part for caching is that the names are stable.


I consider the npm/JS integration in shadow-cljs a solved problem. For the most part you can just install any npm package and use it right away without any additional configuration or tool setup. It all just works. Everything is in place to fully switch to Closure :advanced once the support gets more stable and reliable (or the JS code gets more usable, notably strict ES6+). You won’t have to change a thing.

“Just” using webpack proved non-trivial and problematic in many cases so that path was ultimately abandoned. It is however a viable solution in “simple” builds that just have a few isolated npm deps which can be kept out of the actual CLJS build.

Unfortunately a small percentage of the JS world actually stopped writing ECMAScript and started writing WebpackJS. This means that you’ll sometimes find JS libs on npm that will require("./some.css"). shadow-cljs will just ignore these for now but that means you have to get your .css some other way which is not always easy. I hope to add support for this rather weird convention some day but the CSS processing support in shadow-cljs is still hanging out in the Hammock.

:npm-module is a solution for projects that are primarily webpack driven and just starting to introduce CLJS.

Someone more familiar with the webpack internals may be able to create something more usable for CLJS interop where I failed and I’d be very curious to see that.

Problem Solved: Source Paths

The question of what exactly shadow-cljs does differently compared to other ClojureScript tools comes up every now and again.

At this point the answer is “A lot!” but that is not a very satisfying answer. The long answer however is really long, so I thought I’d make a blog post series out of it, going into a few internal details about what problems exactly I was trying to solve with certain features. These sometimes might seem like tiny little details but for me they had a big impact.

I’ll leave it up to the reader to decide whether these are actual problems. They were for me, they might not be for you. Pretty much all the features in shadow-cljs came out of personal experience when building CLJS projects over the last 5 years. YMMV.

Problem #1: Source Paths in CLJS

ClojureScript by default does not have the concept of source paths, only “inputs”. An input may either be a file or a directory. In case of a directory it is searched recursively for .clj(s|c) files which then become inputs.

The problem is that all inputs are included in all builds by default.

Suppose you want to build 2 separate things from one code base, maybe a browser client app with a complementary node.js server. For sake of simplicity I’ll trim down the code examples to an absolute minimum.

Imagine this simple file structure

├── deps.edn
├── build.clj
└── src
    └── demo
        ├── client.cljs
        └── server.cljs

The client

(ns demo.client)

(js/console.log "client")

The server

(ns demo.server)

(js/console.log "server")

The build file

(require '[cljs.build.api :as api])

(api/build "src"
  {:output-to "out/main.js"
   :verbose true
   :target :nodejs
   :optimizations :advanced})

Compiling this and running the generated code produces output from both namespaces:

$ clj build.clj
$ node out/main.js

As expected the generated output contains ALL the code since the config does not capture which code should be included. This will not always be this obvious since not everything makes itself known like this. It is very easy to overlook files and accidentally include them in a build when you wouldn’t otherwise need them. In theory :advanced should take care of this but that does not always work.

In addition the compiler “inputs” are not known to Clojure at all. So if you want to use macros you need to include those separately via the :source-paths of lein to ensure they end up on the classpath.

Solution #1: :main

The compiler option :main solves that as it lets us select an “entry” namespace and only its dependencies will be included in the compilation.

(require '[cljs.build.api :as api])

(api/build "src"
  {:output-to "out/main.js"
   :verbose true
   :target :nodejs
   :main 'demo.server
   :optimizations :advanced})

Recompile and we only get the desired server output. If you always remember to set this you will be safe.

Solution #2: Separate Source Paths

The more common solution is to split out the code into separate source paths from the beginning. So each “target” gets its own folder and each build will only pick the relevant folders.

├── deps.edn
├── build.clj
└── src
    ├── client
    │   └── demo
    │       └── client.cljs
    ├── server
    │   └── demo
    │       └── server.cljs
    └── shared
        └── demo
            └── foo.cljs

You will typically end up with an additional src/shared folder for code shared among both targets. I personally find this incredibly frustrating to work with.

I suspect that this pattern became wide-spread since :main was introduced some time after multiple source paths became a thing. I’m partially to blame for this since I was the one that added support for multiple :source-paths in lein-cljsbuild.

I’m not saying that allowing multiple :source-paths is a bad thing, there are several very valid use-cases for doing this. I only think that this pattern is overused and we already have namespaces to separate our code. I’m all for separating src/main from src/test but src/shared goes way too far IMHO.

shadow-cljs Solution

The solution in shadow-cljs is pretty straightforward.

  • shadow-cljs expects “entry” namespaces to be configured for all builds. Browser builds do this via :modules, node builds via :main. This is the default and you cannot build those targets without them
  • Multiple :source-paths are supported but sources are only taken into account when referenced
  • Multiple :source-paths are always global. You cannot configure a separate source path per build
  • :source-paths are always on the classpath

Although the implementation in shadow-cljs is entirely different it doesn’t provide anything that would not be possible with standard CLJS. I do believe that enforcing the config of “entry” namespaces however saves you from unknowingly including too much code in your builds. shadow-cljs just takes care of setting a better default so you don’t have to worry about it. You’ll see this pattern repeated in many of the shadow-cljs features.
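For the client/server example above, a minimal shadow-cljs.edn following these rules might look like the sketch below. Everything here (build ids, output paths, the demo.server/main entry fn) is illustrative, not taken from a real project.

```clojure
;; illustrative sketch of a shadow-cljs.edn for the client/server split
{:source-paths ["src"]

 :builds
 {:client {:target  :browser
           :output-dir "public/js"
           ;; browser builds declare their entries via :modules
           :modules {:main {:entries [demo.client]}}}

  :server {:target    :node-script
           ;; node builds declare their entry via :main
           :main      demo.server/main
           :output-to "out/server.js"}}}
```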

The Many Ways to use shadow-cljs

This post on ClojureVerse prompted a whole discussion about boot vs lein again and all I can think is: Why does it matter?

Are we going to add the tools.deps clojure tool to this discussion next?

Why not just use Clojure? You can get very far with just doing that and as a bonus you can do everything from the REPL as well. I’m going to use shadow-cljs as an example here but I think it applies to a whole lot of other “tools” as well.

Build it as a Library first

First and foremost shadow-cljs is built as a normal Clojure Library. You can add the thheller/shadow-cljs artifact to any tool that is able to construct a Java Classpath for you and you can start using it.

lein run -m shadow.cljs.devtools.cli compile app
boot run -m shadow.cljs.devtools.cli compile app # does this exist yet?
clojure -m shadow.cljs.devtools.cli compile app
mvn exec:java ... # not exactly sure how this works but it works
java -cp ... clojure.main -m shadow.cljs.devtools.cli compile app
shadow-cljs compile app

The .cli namespace is just a small wrapper to process the strings we get from the command line and turn them into proper Clojure data. In the REPL you can just call the .api namespace directly (properly require’d of course)

(shadow.cljs.devtools.api/compile :app)

Want to rsync your release version to a remote server?


(require '[shadow.cljs.devtools.api :as shadow]
         '[some.lib :refer (rsync)])

(defn release []
  (shadow/release :app)
  (rsync "some-dir/*" "[email protected]:/some-dir"))

Run it via lein run -m or shadow-cljs clj-run or just call (release) at the REPL. You get the idea.

Why have a command line tool then?


You have to type less. shadow-cljs compile app vs lein run -m shadow.cljs.devtools.cli compile app.

It can also check a lot of things without an actual JVM and can provide faster feedback in those cases.


Starting a JVM+Clojure+Deps takes a while. This can be improved if the Clojure code is AOT compiled but it still won’t be very fast. Fortunately we don’t need to start a new JVM for everything, we can just re-use one we already started.

This is exactly what the shadow-cljs tool does. It will AOT compile the relevant code on the first startup so subsequent startups are faster. The shadow-cljs server starts the JVM in server mode. Every other command will then use that JVM instead of starting a new one.

This concept is not new. grench and drip come to mind or any Clojure REPL.

Here are some numbers to compare the effect this optimization has.

The command used is:

touch src/starter/browser.cljs && time shadow-cljs compile app
  • touch to force a recompile when using incremental compilation
  • Server means shadow-cljs server is running, no new JVM is started

The difference is quite dramatic. Given that I get easily distracted when waiting for things this has a huge impact on my focus during the day.

The non-Server code could be optimized a bit since it always loads all development related code. compile for example doesn’t need all the REPL/live-reload related code but given the presence of server this never seemed necessary.

You’ll most likely use watch during actual development and all tools have an optimized experience for this but it still matters for other commands.

But what about boot?

boot tries to do a lot more than just providing a classpath. The problem with this is that it only works if you want to use the exact abstractions boot provides. As soon as you want to do something slightly different it just starts getting in the way. I don’t recommend using shadow-cljs with boot since it breaks all the caching shadow-cljs tries to do. boot-cljs has the same problem as far as I can tell. Restarting the boot process wipes all cache. You could make an argument here that this is a good thing since it prevents stale cache but that just treats the symptom instead of fixing the root cause (which I did in shadow-cljs).

If you separated the classpath/pod management from boot it would make for a very good library I think. I do like some ideas in boot but it’s too complected for me.


Write everything as a Clojure Library so it works in any tool and the REPL.

I simplified a great deal here. Things are a lot more complicated in the real world but I am convinced that we could get way better results if less code were written specifically for one build tool and Clojure was instead used as the common ground.


JS Dependencies: In Practice

In my previous posts about JS Dependencies (The Problem, Going forward) I explained why and how shadow-cljs handles JS Dependencies very differently than ClojureScript does by default. To recap:

  • CLJSJS/:foreign-libs do not scale
  • Custom bundles are tedious to work with
  • Closure Compiler can’t yet reliably process a large portion of npm packages
  • shadow-cljs implements a custom JS bundler but removed :foreign-libs support in the process

Installing JS Dependencies

Almost every package on npm will explain how to install it. Those instructions now apply to shadow-cljs as well. So if a library tells you to run:

npm install the-thing

You do exactly that. Nothing more required. You may use yarn if preferred of course. Dependencies will be added to the package.json file and this will be used to manage them. If you don’t have a package.json yet run npm init.

You can use this Quick-Start template to try everything described here.

Using JS Dependencies

Most npm packages will also include some instructions on how to use the actual code. The “old” CommonJS style just has require calls which translates directly.

var react = require("react");

(ns my.app
  (:require ["react" :as react]))

Whatever "string" parameter is used when calling require transfers to the ns :require as-is. The :as alias is up to you. Once we have that we can use the code like any other CLJS namespace.

(react/createElement "div" nil "hello world")

This is different than what :foreign-libs/CLJSJS did before where you included the thing in the ns but then used js/Thing (or whatever global it exported) to use the code. Always use the ns form and whatever :as alias you provided. You may also use :refer and :rename if you wish.

Some packages just export a single function which you can call directly by using (:require ["thing" :as thing]) and then (thing).

More recently some packages started using ES6 import statements in their examples. Those also translate pretty much 1:1 with one slight difference related to default exports. Translating this list of examples

import defaultExport from "module-name";
import * as name from "module-name";
import { export } from "module-name";
import { export as alias } from "module-name";
import { export1 , export2 } from "module-name";
import { export1 , export2 as alias2 , [...] } from "module-name";
import defaultExport, { export [ , [...] ] } from "module-name";
import defaultExport, * as name from "module-name";
import "module-name";

becomes (all inside ns of course)

(:require ["module-name" :default defaultExport])
(:require ["module-name" :as name])
(:require ["module-name" :refer (export)])
(:require ["module-name" :rename {export alias}])
(:require ["module-name" :refer (export1 export2)])
(:require ["module-name" :refer (export1) :rename {export2 alias2}])
(:require ["module-name" :refer (export) :default defaultExport])
(:require ["module-name" :as name :default defaultExport])
(:require ["module-name"])

The :default option is currently only available in shadow-cljs, you can vote here to hopefully make it standard. You can always use :as alias and then call alias/default if you prefer to stay compatible with standard CLJS in the meantime. IMHO that just gets a bit tedious for some packages.

New Possibilities

Previously we were using bundled code, which may include code we don’t actually need. Some packages also describe ways that you can include only parts of the package leading to much less code included in your final build.

react-virtualized has one of those examples:

// You can import any component you want as a named export from 'react-virtualized', eg
import { Column, Table } from 'react-virtualized'

// But if you only use a few react-virtualized components,
// And you're concerned about increasing your application's bundle size,
// You can directly import only the components you need, like so:
import AutoSizer from 'react-virtualized/dist/commonjs/AutoSizer'
import List from 'react-virtualized/dist/commonjs/List'

This we can also translate easily

;; all
(:require ["react-virtualized" :refer (Column Table)])
;; one by one
(:require ["react-virtualized/dist/commonjs/AutoSizer" :default virtual-auto-sizer])
(:require ["react-virtualized/dist/commonjs/List" :default virtual-list])

Resolving JS Dependencies

By default shadow-cljs will resolve all (:require ["thing" :as x]) requires following the npm convention. This means it will look at <project>/node_modules/thing/... and follow the code along there. To customize how this works shadow-cljs exposes a :resolve config option that lets you override how things are resolved.

Using a CDN

Say you already have React included in your page via a CDN. You could just start using js/React again but we stopped doing that for a good reason. Instead you continue to use (:require ["react" :as react]) but configure how "react" resolves like this in your shadow-cljs.edn config for your build

{:builds
 {:app
  {:target :browser
   ...
   :js-options
   {:resolve {"react" {:target :global
                       :global "React"}}}}

  :server
  {:target :node-script
   ...}}}
The :app build will now use the global React instance while the :server build continues using the "react" npm package. No need to fiddle with the code to make this work.

Redirecting “require”

Some packages provide multiple “dist” files and sometimes the default one described doesn’t quite work in shadow-cljs. One good example for this is "d3". Their default "main" points to "build/d3.node.js" but that is not what we want when working with the browser. Their ES6 code runs into a bug in the Closure Compiler, so we can’t use that. Instead we just redirect the require to some other require.

{:resolve {"d3" {:target :npm
                 :require "d3/build/d3.js"}}}

You could just (:require ["d3/build/d3.js" :as d3]) directly as well if you only care about the Browser.

Using local Files

You may also use :resolve to directly map to files in your project.

{:resolve {"my-thing" {:target :file
                       :file "path/to/file.js"}}}

The :file is always relative to the project directory. The included file may use require or import/export and those will be followed and included properly as well.

Note that this method should only be used when you are trying to replace actual npm packages. To include local JS files you wrote you should be using the newer method.

Migrating cljsjs.*

Many CLJS libraries are still using CLJSJS packages and they would break with shadow-cljs since that no longer supports :foreign-libs. I have a clear migration path for this and it just requires one shim file that maps the cljsjs.thing back to its original npm package and exposes the expected global variable.

For react this requires a file like src/cljsjs/react.cljs:

(ns cljsjs.react
  (:require ["react" :as react]
            ["create-react-class" :as crc]))

(js/goog.object.set react "createClass" crc)
(js/goog.exportSymbol "React" react)

Since this would be tedious for everyone to do manually I created the shadow-cljsjs library which provides just that. It does not include every package but I’ll keep adding them and contributions are very welcome as well.

It only provides the shim files though, you’ll still need to npm install the actual packages yourself.

What to do when things don’t work?

Since the JS world is still evolving rapidly and not everyone is using the same way to write and distribute code, there are some things shadow-cljs cannot work around automatically and that may require custom :resolve configs. There may also be bugs, this is all very new after all.

Please report any packages that don’t work as expected. #shadow-cljs is also a good place to find me.

Discuss on ClojureVerse.

Improved Externs Inference

In my previous post I talked about Externs and the options shadow-cljs provides to help deal with them. I sort of skipped over the whole Externs Inference subject since I didn’t feel it was “ready” yet. It worked reasonably well but also generated a whole bunch of warnings that weren’t actually issues (eg. any deftype, defrecord, …). It also was way more ambitious in trying to actually generate Typed Externs.

Typed or Untyped?

I took the work Maria Geller and David Nolen had done as a starting point but decided against using it since I think we can make this process a whole lot easier without sacrificing anything.

The Closure Compiler acts as a type checker and the Closure Library is fully typed, so naturally every carefully crafted externs file you find in the wild is also fully typed. This is great if all your code is typed and the externs code just flows through. However ClojureScript is not typed, therefore you gain nothing by using typed externs.

Whenever the Closure Compiler cannot determine the type of something it will do the “safe” thing and neither remove nor rename a property if it is defined anywhere in the externs, regardless of type. For CLJS code this will be what happens 99% of the time.

Untyped FTW!

A lot of CLJSJS packages already cheated and used generated externs that are sort of half-in/half-out. The trouble with that is that it may list too many properties and may actually keep more code alive than required. At the very least it will stop the Closure Compiler from renaming a few things that it might otherwise rename. We only need externs for the code we actually call from CLJS, not everything the JS code uses.

Luckily for us the impact of too many externs is rather minuscule and something we could tweak later if we wanted. It is a much better experience to not run into the dreaded ln.S is not a function errors and the effect on code size is surprisingly small: a 3KB gain with generated externs over manually written externs for a 600KB .js file. I think that is very acceptable already but we can get that down to almost zero difference by using :infer-externs to actually only generate the things we need.

How does it work?

The official Externs Inference Guide has a good overview and pretty much all of it still applies. However as explained above we do not need to worry about the type.

As the example explains this code

(set! *warn-on-infer* true)

(defn wrap-baz [x]
  (.baz x))

would produce this warning

------ WARNING #1 --------------------------------------------------------------
 File: ~/project/src/demo/thing.cljs:23:3
  19 |
  20 | (set! *warn-on-infer* true)
  21 |
  22 | (defn wrap-baz [x]
  23 |   (.baz x))
 Cannot infer target type in expression (. x baz)

The guide tells you to add this tag metadata

(defn wrap-baz [^js/Foo.Bar x]
  (.baz x))

which we can simplify down to just ^js.

(defn wrap-baz [^js x]
  (.baz x))

By using the ^js tag metadata we are just telling the compiler that this is a native JS value and we are going to need externs for it. In shadow-cljs the /Foo.Bar type info is optional. You can still use it if you like but shadow-cljs won’t bother you if you don’t.

I also did a whole sweep through the compiler to get rid of some warnings we don’t actually need externs for. It will now also account for all properties that are actually defined in externs and won’t warn about those.

There are still a few cases left where the compiler might produce a warning when in fact no externs are needed. This will happen if you do native interop on either CLJS or Closure JS objects. We do not need externs here and you can get rid of the warning by using either the ^clj tag for CLJS or the ^goog tag. If you know the actual type of the thing you can also use that instead.

How to use it

By default you still need to (set! *warn-on-infer* true) to actually get warnings about Externs Inference for each individual file.

Since that is a bit tedious to do for every file in your project I introduced a new :compiler-options {:infer-externs :auto} setting (in addition to just true|false). This will turn on the Externs Inference warnings for your own files only, ie. not files in .jars.
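In shadow-cljs.edn that setting goes into the build config, e.g. (the :app build id here is illustrative):

```clojure
{:builds
 {:app
  {:target :browser
   :compiler-options {:infer-externs :auto}}}}
```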

You can try :all but that is crazy talk. There are still about 200 warnings in cljs.core that we don’t need externs for, so do not use :all unless you really want to see a whole lot of warnings. I got to over 1200 before all the tweaks. 😉

Also make sure you are using at least [email protected].


This is not official work. It is an experiment, based on my experience with the Closure Compiler and some more or less educated guesses. The whole subject is not documented very well and I mostly went through a whole bunch of Java sources to figure it out. It works well for me but YMMV. It definitely could use a few more real world examples to test against.

If everything works out the way I hope we can make this official and part of CLJS itself. Until then it is shadow-cljs only.

I hang out in the Clojurians Slack if you have questions. Feel free to open a Github Issue if you run into trouble as well.