RETIRED: Setting up development/QA/staging instances


(Joshua Moskovitz) #1

Note: This article has been migrated and updated.

If you want to set up multiple instances, see:

  • This Help Center article if you have a single repository.
  • This Help Center article if you are already using multiple repositories.

If you want to set up clustering, see this documentation page.

(Mike DeAngelo (a.k.a. Dr. StrangeLooker)) #2

In Looker versions 3.48 and later running with clustering, there is a message queue broker running in the background to distribute tasks across the cluster. By default this broker communicates with each node via port 61616. Your cluster nodes need to be able to reach each other via that port, which may involve changing some firewall rules.

You can override this port with a startup flag such as `--queue-broker-port=<i>`, where `i` is the port you wish to use.
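As a concrete sketch, the override could be passed on the startup command line like this (the startup script name and the port value are illustrative, not from the original post):

```shell
# Illustrative only: start Looker with a non-default queue broker port.
# 61616 is the default; 61617 is an arbitrary alternative.
./looker start --queue-broker-port=61617
```

Every node in the cluster would need to agree on the chosen port, and firewall rules must allow node-to-node traffic on it.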

(John Norman) #3

Once dev work ceases on production, can the `models` and `models-user-*` directories and their contents be removed?

(sam) #4

Hey @John_Norman - you definitely don’t want to delete the models directory - that has your production LookML. The models-user-* directories have dev mode LookML and so should theoretically be empty if nobody is in dev mode on the instance.

We don’t recommend deleting any of these directories - messing with them outside of the Looker UI can cause unexpected issues.

In terms of technical differences between the directories, the models directory is the production version of the projects, while the models-user-* directories are dev specific versions of the projects. If you remove the models directory then your non-dev mode will have no projects.
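To make the distinction concrete, the on-disk layout looks roughly like this (exact paths vary by install; the directory names follow the post, the user numbers are made up):

```shell
# Rough sketch of the relevant directories (illustrative only):
#
#   looker/
#   ├── models/            # production LookML, served to all non-dev users
#   ├── models-user-17/    # user 17's dev-mode copies of the projects
#   └── models-user-42/    # user 42's dev-mode copies of the projects
#
# Deleting models/ would leave production with no projects; the
# models-user-* trees hold uncommitted dev-mode work.
```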

(Eric Meisel) #5

Wanted to share an experience here. We had a dev/prod setup on the machine level, but were unable to control releasing LookML changes to the dev instance without releasing to prod. We needed this as we had some users that were set up as QA testers under other user names (and, as such, could not view a developer’s “dev-mode” changes).

To work around this, we were able to create a fork of our repository, and configured dev to point at one, while prod pointed at the other. The standard pull request integration was in use for the dev fork, and releasing to production meant creating a 2nd pull request to merge the master branches between forks.
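The fork-promotion flow above can be sketched with two local repositories standing in for the prod repo and its dev fork (all names and the LookML change here are hypothetical; in practice the final merge would be the second pull request rather than a local `git merge`):

```shell
set -e
tmp=$(mktemp -d)

# Stand-in for the production repository.
git init -q -b master "$tmp/prod"
git -C "$tmp/prod" -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "initial production state"

# The dev fork starts as a copy of prod and receives a LookML change.
git clone -q "$tmp/prod" "$tmp/dev"
echo "view: orders {}" > "$tmp/dev/orders.view.lkml"
git -C "$tmp/dev" add orders.view.lkml
git -C "$tmp/dev" -c user.email=ci@example.com -c user.name=ci \
    commit -q -m "dev change"

# Releasing to production: merge the dev fork's master into prod's master.
git -C "$tmp/prod" remote add dev "$tmp/dev"
git -C "$tmp/prod" fetch -q dev
git -C "$tmp/prod" merge -q --ff-only dev/master
```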

Looker 5 (coming soon) should allow for standard branching to work around this. But those that are stuck on an older version or are looking for ways to implement this today can follow this approach.

(Spencer White) #6

Nice Eric, glad it’s working for you. And thanks for sharing with the Looker Community!

(Dan) #7

Now that Looker 5 is out, is there a way to have a dev instance and a prod instance set up, both pointing to the same repo, where pushing something out to master on the dev instance then requires an extra manual step, a separate pull request, to get the “master” on dev (or whatever we want to call that branch) merged into the master on prod? (Basically, we would want to be able to manually initiate the equivalent of webhook 2 in the diagram listed here: How to Setup Git with a Staging Server & Pull Requests.)

The use case is the same as what Eric had, I’m just curious if there is a more straightforward way to achieve this with Looker 5 now.

(sam) #8

Hey @Dan, with Looker 5 there is now a notion of creating and switching branches within Looker’s IDE. This could allow folks to set up more of a QA/staging workflow on one instance, but it doesn’t really change how things work when you have a dev and prod instance.

Webhook 2 in that diagram should come from the Git service. You should be able to set this to fire automatically - with GitHub, for example, you can configure a webhook to be hit any time a commit happens, and GitHub has documentation on it too. Let me know if this is what you were going for!
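As a sketch, a push webhook like the one described could be registered through GitHub's REST API; the owner, repo, token, and deploy URL below are all placeholders, not values from this thread:

```shell
# Illustrative only: create a webhook that fires on every push, pointing at
# the prod instance's deploy endpoint (all identifiers are placeholders).
curl -X POST \
  -H "Authorization: Bearer $GITHUB_TOKEN" \
  -H "Accept: application/vnd.github+json" \
  https://api.github.com/repos/OWNER/REPO/hooks \
  -d '{
        "name": "web",
        "events": ["push"],
        "config": {
          "url": "https://looker.example.com/webhooks/projects/my_project/deploy",
          "content_type": "json"
        }
      }'
```

The same webhook can of course be created by hand in the repository's settings page rather than via the API.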

(Arish Ojaswi) #9


How does indexing of dashboards work across the dev and prod instances?

Suppose both instances are currently in sync. Now, if I were to create a new user-defined dashboard, how would I ensure that the dashboard ID is the same in both instances?

My use case: I use URL links to drill down from one dashboard to another. These links use dashboard ID to identify the target dashboard. Therefore, I would require these dashboard IDs to be consistent across instances.


(Aleksandrs Vedernikovs) #10

Hi @arishojaswi,

This is an interesting use case. Firstly, it would be great to understand why you need content to be created in your dev environment. Content is not backed by code, so it only lives in Looker's internal DB. As stated above, you can make a snapshot of this backend DB and hook it up to your prod instance, which would carry over all the content with it; however, that is quite a complex process. If you use LookML dashboards then this process becomes relatively easy, as they are backed by code.

The other possible way of doing it would be to use Gazer

dashboard import
The `dashboard import DASHBOARD_FILE SPACE_ID` command is used to import a dashboard and its associated Looks from a file. If a dashboard or Look by the same name exists in that space then the `--force` switch must be used.

Gazer will attempt to update the existing dashboard or look of the same name, rather than create a new dashboard or look. In that way the id, schedules, permissions, etc. will be preserved. 

This tool is open source and is not supported by Looker’s normal support channels; issues can be logged on the project’s issue tracker.
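A minimal invocation of that import command might look like this (the host, file name, and space ID are placeholders; `gzr` is Gazer's CLI name):

```shell
# Illustrative only: import a dashboard file into space 123 on the target
# instance, overwriting the same-named dashboard so its id is preserved.
gzr dashboard import Sales_Overview_dashboard.json 123 \
    --host looker.example.com --force
```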



(Arish Ojaswi) #11

Hi @Aleksandrs_Vederniko,

Thanks a lot for the explanation. I am currently using Gazer for migrating dashboards across environments. It is much simpler than converting dashboards to LookML and then migrating them. However, the dashboard IDs still get changed even when migrated using Gazer.

Sean from your team suggested an alternative solution which seems to be a good way out (we are yet to test it). Instead of using dashboard IDs to refer to dashboards in links, we use dashboard slugs instead. Slugs are unique to each dashboard and remain constant across environments.

Thanks again.