# sql-migrate
> SQL Schema migration tool for [Go](http://golang.org/). Based on [gorp](https://github.com/go-gorp/gorp) and [goose](https://bitbucket.org/liamstask/goose).
[Build Status](https://travis-ci.org/rubenv/sql-migrate) [GoDoc](https://godoc.org/github.com/rubenv/sql-migrate)
Using [modl](https://github.com/jmoiron/modl)? Check out [modl-migrate](https://github.com/rubenv/modl-migrate).
## Features
* Usable as a CLI tool or as a library
* Supports SQLite, PostgreSQL, MySQL, MSSQL and Oracle databases (through [gorp](https://github.com/go-gorp/gorp))
* Can embed migrations into your application
* Migrations are defined with SQL for full flexibility
* Atomic migrations
* Up/down migrations to allow rollback
* Supports multiple database types in one project
* Works great with other libraries such as [sqlx](http://jmoiron.github.io/sqlx/)
## Installation
To install the library and command line program, use the following:
```bash
go get -v github.com/rubenv/sql-migrate/...
```
## Usage
### As a standalone tool
```
$ sql-migrate --help
usage: sql-migrate [--version] [--help] <command> [<args>]

Available commands are:
    down      Undo a database migration
    new       Create a new migration
    redo      Reapply the last migration
    status    Show migration status
    up        Migrates the database to the most recent version available
```
Each command requires a configuration file (which defaults to `dbconfig.yml`, but can be specified with the `-config` flag). This config file should specify one or more environments:
```yml
development:
  dialect: sqlite3
  datasource: test.db
  dir: migrations/sqlite3

production:
  dialect: postgres
  datasource: dbname=myapp sslmode=disable
  dir: migrations/postgres
  table: migrations
```
(See more examples for different setups [here](test-integration/dbconfig.yml).)

The `datasource` field is passed through `os.ExpandEnv`, so you can reference environment variables in it. This is useful if you don't want to store credentials in the file:
```yml
production:
  dialect: postgres
  datasource: host=prodhost dbname=proddb user=${DB_USER} password=${DB_PASSWORD} sslmode=require
  dir: migrations
  table: migrations
```
The `table` setting is optional and will default to `gorp_migrations`.
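When using sql-migrate as a library (see below), the migration table can be changed in the same way; a minimal sketch, assuming the package-level `migrate.SetTable` helper:

```go
// Record applied migrations in "migrations" instead of the default
// "gorp_migrations" table. Call this before applying any migrations.
migrate.SetTable("migrations")
```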
The environment that will be used can be specified with the `-env` flag (defaults to `development`).
Use the `--help` flag in combination with any of the commands to get an overview of its usage:
```
$ sql-migrate up --help
Usage: sql-migrate up [options] ...

  Migrates the database to the most recent version available.

Options:

  -config=dbconfig.yml   Configuration file to use.
  -env="development"     Environment.
  -limit=0               Limit the number of migrations (0 = unlimited).
  -dryrun                Don't apply migrations, just print them.
```
The `new` command creates a new empty migration template using the following pattern `<current time>-<name>.sql`.
The `up` command applies all available migrations. By contrast, `down` will only apply one migration by default. This behavior can be changed for both by using the `-limit` parameter.
The `redo` command will unapply the last migration and reapply it. This is useful during development, when you're writing migrations.
Use the `status` command to see the state of the applied migrations:
```bash
$ sql-migrate status
+---------------+-----------------------------------------+
| MIGRATION | APPLIED |
+---------------+-----------------------------------------+
| 1_initial.sql | 2014-09-13 08:19:06.788354925 +0000 UTC |
| 2_record.sql | no |
+---------------+-----------------------------------------+
```
#### Running integration tests
You can see example setups for the different databases by executing the `.sh` files in [test-integration](test-integration/):
```bash
# Run mysql-env.sh example (you need to be in the project root directory)
./test-integration/mysql-env.sh
```
### MySQL Caveat
If you are using MySQL, you must append `?parseTime=true` to the `datasource` configuration. For example:
```yml
production:
  dialect: mysql
  datasource: root@/dbname?parseTime=true
  dir: migrations/mysql
  table: migrations
```
See [here](https://github.com/go-sql-driver/mysql#parsetime) for more information.
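The same caveat applies when using sql-migrate as a library (see the next section): the DSN passed to `sql.Open` needs `parseTime=true` too. A minimal sketch, assuming the go-sql-driver/mysql driver:

```go
import (
    "database/sql"

    _ "github.com/go-sql-driver/mysql"
)

func openDB() (*sql.DB, error) {
    // parseTime=true lets the driver scan DATETIME columns into time.Time,
    // which the migration bookkeeping relies on.
    return sql.Open("mysql", "root@/dbname?parseTime=true")
}
```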
### As a library
Import sql-migrate into your application:
```go
import "github.com/rubenv/sql-migrate"
```
Set up a source of migrations; this can come from memory, from a set of files, from a packr box, or from bindata (more on that later):
```go
// Hardcoded strings in memory:
migrations := &migrate.MemoryMigrationSource{
    Migrations: []*migrate.Migration{
        &migrate.Migration{
            Id:   "123",
            Up:   []string{"CREATE TABLE people (id int)"},
            Down: []string{"DROP TABLE people"},
        },
    },
}

// OR: Read migrations from a folder:
migrations := &migrate.FileMigrationSource{
    Dir: "db/migrations",
}

// OR: Use migrations from a packr box
migrations := &migrate.PackrMigrationSource{
    Box: packr.New("migrations", "./migrations"),
}

// OR: Use migrations from bindata:
migrations := &migrate.AssetMigrationSource{
    Asset:    Asset,
    AssetDir: AssetDir,
    Dir:      "migrations",
}
```
Then use the `Exec` function to upgrade your database:
```go
db, err := sql.Open("sqlite3", filename)
if err != nil {
    // Handle errors!
}

n, err := migrate.Exec(db, "sqlite3", migrations, migrate.Up)
if err != nil {
    // Handle errors!
}
fmt.Printf("Applied %d migrations!\n", n)
```
Note that `n` can be greater than `0` even if there is an error: any migration that succeeded will remain applied even if a later one fails.
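Rolling back or limiting the number of migrations works the same way from code, using the `migrate.Down` direction; a minimal sketch, reusing the `db` and `migrations` values from above and assuming the `migrate.ExecMax` helper from the package API:

```go
// Undo only the most recent migration (mirrors `sql-migrate down`):
n, err = migrate.ExecMax(db, "sqlite3", migrations, migrate.Down, 1)
if err != nil {
    // Handle errors!
}
fmt.Printf("Rolled back %d migration(s)!\n", n)
```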
Check [the GoDoc reference](https://godoc.org/github.com/rubenv/sql-migrate) for the full documentation.
## Writing migrations
Migrations are defined in SQL files, which contain a set of SQL statements. Special comments are used to distinguish up and down migrations.
```sql
-- +migrate Up
-- SQL in section 'Up' is executed when this migration is applied
CREATE TABLE people (id int);
-- +migrate Down
-- SQL section 'Down' is executed when this migration is rolled back
DROP TABLE people;
```
You can put multiple statements in each block, as long as you end them with a semicolon (`;`).
You can alternatively set up a separator string that matches an entire line by setting `sqlparse.LineSeparator`. This
can be used to imitate, for example, MS SQL Query Analyzer functionality where commands can be separated by a line with
contents of `GO`. If `sqlparse.LineSeparator` is matched, it will not be included in the resulting migration scripts.
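For example, to split statements on lines that contain only `GO`, a minimal sketch, assuming `LineSeparator` is a package-level variable in the `sqlparse` subpackage:

```go
import "github.com/rubenv/sql-migrate/sqlparse"

func init() {
    // Treat a line consisting of "GO" as a statement separator, similar to
    // MS SQL Query Analyzer. The separator line itself is not included in
    // the resulting migration statements.
    sqlparse.LineSeparator = "GO"
}
```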
If you have complex statements which contain semicolons, use `StatementBegin` and `StatementEnd` to indicate boundaries:
```sql
-- +migrate Up
CREATE TABLE people (id int);
-- +migrate StatementBegin
CREATE OR REPLACE FUNCTION do_something()
returns void AS $$
DECLARE
  create_query text;
BEGIN
  -- Do something here
END;
$$
language plpgsql;
-- +migrate StatementEnd
-- +migrate Down
DROP FUNCTION do_something();
DROP TABLE people;
```
The order in which migrations are applied is defined through the filename: sql-migrate will sort migrations based on their name. It's recommended to use an increasing version number or a timestamp as the first part of the filename.
Normally each migration is run within a transaction in order to guarantee that it is fully atomic. However some SQL commands (for example creating an index concurrently in PostgreSQL) cannot be executed inside a transaction. In order to execute such a command in a migration, the migration can be run using the `notransaction` option:
```sql
-- +migrate Up notransaction
CREATE UNIQUE INDEX CONCURRENTLY people_unique_id_idx ON people (id);
-- +migrate Down
DROP INDEX people_unique_id_idx;
```
## Embedding migrations with [packr](https://github.com/gobuffalo/packr)
If you like your Go applications self-contained (that is: a single binary): use [packr](https://github.com/gobuffalo/packr) to embed the migration files.
Just write your migration files as usual, as a set of SQL files in a folder.
Import the packr package into your application:
```go
import "github.com/gobuffalo/packr/v2"
```
Use the `PackrMigrationSource` in your application to find the migrations:
```go
migrations := &migrate.PackrMigrationSource{
    Box: packr.New("migrations", "./migrations"),
}
```
If you already have a box and would like to use a subdirectory:
```go
migrations := &migrate.PackrMigrationSource{
    Box: myBox,
    Dir: "./migrations",
}
```
## Embedding migrations with [bindata](https://github.com/shuLhan/go-bindata)
As an alternative, but slightly less maintained, you can use [bindata](https://github.com/shuLhan/go-bindata) to embed the migration files.
Just write your migration files as usual, as a set of SQL files in a folder.
Then use bindata to generate a `.go` file with the migrations embedded:
```bash
go-bindata -pkg myapp -o bindata.go db/migrations/
```
The resulting `bindata.go` file will contain your migrations. Remember to regenerate your `bindata.go` file whenever you add or modify a migration (`go generate` can help here).
Use the `AssetMigrationSource` in your application to find the migrations:
```go
migrations := &migrate.AssetMigrationSource{
    Asset:    Asset,
    AssetDir: AssetDir,
    Dir:      "db/migrations",
}
```
Both `Asset` and `AssetDir` are functions provided by bindata.
Then proceed as usual.
## Extending
Adding a new migration source means implementing `MigrationSource`.
```go
type MigrationSource interface {
    FindMigrations() ([]*Migration, error)
}
```
The resulting slice of migrations will be executed in the given order, so it should usually be sorted by the `Id` field.
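For example, a minimal custom source might look like this (a sketch; the hypothetical `hardcodedMigrationSource` type simply returns a fixed, pre-sorted list using the same `Migration` fields shown earlier):

```go
type hardcodedMigrationSource struct{}

func (hardcodedMigrationSource) FindMigrations() ([]*migrate.Migration, error) {
    // Return migrations already sorted by Id, as callers expect.
    return []*migrate.Migration{
        {
            Id:   "1_create_people.sql",
            Up:   []string{"CREATE TABLE people (id int)"},
            Down: []string{"DROP TABLE people"},
        },
    }, nil
}
```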
## Usage with [sqlx](http://jmoiron.github.io/sqlx/)
This library is compatible with sqlx. When calling migrate, just pass the plain `*sql.DB` that is embedded in your `*sqlx.DB`:
```go
n, err := migrate.Exec(db.DB, "sqlite3", migrations, migrate.Up)
// ^^^ Here db is a *sqlx.DB; its DB field is the underlying *sql.DB
if err != nil {
    // Handle errors!
}
```
## License
This library is distributed under the [MIT](LICENSE) license.