Best practice/suggestions on QA in Looker


(Matt B.) #1

We have numerous developers at different levels (LookML, Looks, Dashboards). What is the best practice or recommended process for validating that all development is sound?

LookML - Seems like using the “Pull Requests Recommended” or “Pull Requests Required” option is best practice?

Looks/Dashboards - The “Content Validator” is great for validating that the Looker internals are sound (e.g. using existing dimensions/measures). I’m looking to QA whether a visualization accurately depicts the metric it claims to display. A developer can create a Look/Dashboard without a second set of eyes ever seeing it. Is there a way to be proactively notified about newly created Looks/Dashboards, or a link to see the most recently created ones?
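One way to approximate this today is to poll the Looker API for recently created content. A minimal sketch, assuming the Looker Python SDK (`looker_sdk`) and valid API credentials; the SDK calls are shown in comments as illustration, while the pure helper `created_since` does the date filtering and runs standalone:

```python
# Sketch: surface recently created Looks/Dashboards for reviewer audit.
# The looker_sdk usage below is an assumption for illustration; the
# created_since() helper is self-contained and testable.
from datetime import datetime, timedelta, timezone

def created_since(items, days, now=None):
    """Return items whose 'created_at' ISO timestamp falls within `days`."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=days)
    return [
        item for item in items
        if datetime.fromisoformat(item["created_at"]) >= cutoff
    ]

# With the Looker Python SDK (assumed; needs looker.ini or env credentials):
#
#   import looker_sdk
#   sdk = looker_sdk.init40()
#   looks = [{"title": l.title, "created_at": l.created_at}
#            for l in sdk.search_looks(sorts="created_at desc")]
#   for look in created_since(looks, days=7):
#       notify_reviewer(look["title"])   # hypothetical notification hook

if __name__ == "__main__":
    now = datetime(2024, 1, 15, tzinfo=timezone.utc)
    sample = [
        {"title": "New revenue Look", "created_at": "2024-01-14T09:00:00+00:00"},
        {"title": "Old churn Look", "created_at": "2023-11-01T09:00:00+00:00"},
    ]
    recent = created_since(sample, days=7, now=now)
    print([l["title"] for l in recent])
```

Run on a schedule (cron, CI job), this gives the “most recently created” list the question asks for, which a reviewer can then check by hand.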


(Arkadi Tereshenkov) #2

We have created a command-line utility that uses the backend APIs to pull Look/Dashboard definitions and run the queries defined in them, validating that all visual objects can be rendered without errors from Looker or the database. We validate explores the same way.
The next step would be to create a sample content database: export query results, then compare them against those baselines during regression testing.
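The comparison step described above can be sketched as a simple diff of exported result rows against a saved baseline. This is a minimal illustration, not the utility itself; row order is normalized so incidental sort differences don't produce false failures:

```python
# Sketch of the regression-test comparison: diff a Look's current query
# results against a previously exported baseline. Rows are plain dicts,
# e.g. as loaded from an exported JSON results file.
import json

def normalize(rows):
    """Sort rows into a canonical order so comparison ignores row order."""
    return sorted(rows, key=lambda r: json.dumps(r, sort_keys=True))

def diff_results(baseline, current):
    """Return (missing, unexpected) rows relative to the baseline."""
    base, cur = normalize(baseline), normalize(current)
    missing = [r for r in base if r not in cur]        # dropped since baseline
    unexpected = [r for r in cur if r not in base]     # new since baseline
    return missing, unexpected

if __name__ == "__main__":
    baseline = [{"region": "EMEA", "revenue": 120},
                {"region": "AMER", "revenue": 300}]
    current = [{"region": "AMER", "revenue": 300},
               {"region": "EMEA", "revenue": 125}]
    missing, unexpected = diff_results(baseline, current)
    print(missing)     # baseline rows no longer returned
    print(unexpected)  # rows that appeared since the baseline was taken
```

A regression run would pass only when both lists are empty; anything else flags the Look for human review, which is exactly the second-set-of-eyes check the original question asks about.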