We use Docker for our development environments because it supports our commitment to excellence: it gives every member of the team an identical development platform while also achieving parity with the production environment. These efficiency gains over traditional development methods (among others we'll share in an ongoing Docker series) let us spend less time on setup and more time building amazing things.
Part of our workflow includes a mechanism to establish and update the seed database, which we use to load near-real-time production content into our development environments as well as our automated testing infrastructure. We've found it's best to work with real data throughout the development process; stale or dummy data risks surfacing unexpected issues toward the end of a project. One efficiency boost we've recently implemented and are excited to share is a technique that dramatically speeds up database imports, especially large ones. This is a big win for us, since we often import large databases multiple times a day on a project. In this post, we'll look at:
- How much faster data volume imports are compared to traditional database dumps piped to `mysql` (both approaches are sketched briefly below)
- How to set up a data volume import with your Drupal Docker stack
- How to tie in this process with your local and continuous integration environments
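
To frame the comparison in the first point, here's a minimal sketch of both approaches. The Compose service name (`db`), database name (`drupal`), dump file (`seed-database.sql.gz`), and volume name (`seed_data`) are illustrative placeholders rather than our actual configuration; the real setup is covered in the sections that follow.

```sh
# Traditional import: stream a SQL dump through the mysql client inside
# the running database container. MySQL has to parse and replay every
# statement, which is slow for large databases.
gunzip -c seed-database.sql.gz \
  | docker compose exec -T db mysql -u root -p"$MYSQL_ROOT_PASSWORD" drupal

# Data volume import: start the database container with a volume that
# already contains MySQL's on-disk data files (/var/lib/mysql), so no SQL
# needs to be replayed at all. Assumes seed_data was populated ahead of
# time from an already-imported copy of the database.
docker run -d --name db-seeded \
  -v seed_data:/var/lib/mysql \
  mysql:8.0
```

In broad strokes, that's where the speedup comes from: attaching ready-made data files skips the SQL parsing, index rebuilding, and row-by-row writes that a conventional import has to perform.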