light refactoring of mkimage.sh and deploy Dockerfile #341
tbg merged 2 commits into cockroachdb:master
Conversation
build/deploy/Dockerfile
Apparently COPY is preferred except when you need the additional ADD semantics. See https://docs.docker.com/articles/dockerfile_best-practices/#add-or-copy. I've been meaning to make this change to the other Dockerfile as well.
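As a minimal sketch of the distinction (paths and the base image are illustrative, not the contents of this Dockerfile):

```dockerfile
# COPY only copies local files/directories into the image. ADD additionally
# supports remote URLs and auto-extracts local tar archives, which is why
# Docker's best practices recommend COPY unless those extra semantics are needed.
FROM busybox
COPY resources /cockroach/resources   # plain file copy: COPY is sufficient
ADD archive.tar.gz /cockroach/        # auto-extraction: only ADD does this
```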
I never really took a look at this before and I'm mildly confused by what it's doing. You've mounted `$(pwd)/build` from the host into `/build` in the container and are then copying the ui and resources directories. Why? I'm going to poke around, so I might end up answering the question myself.
It's exporting the relevant build artifacts from `/cockroach/` to `/build/` (the latter being outside of the container). That's mostly the cockroach binary and the `make testbuild` artifacts. `./resources` and `./ui` are also needed for a functional cockroach binary, so they're in there as well. (I think we should move ui into resources, by the way. Or get rid of both of them by generating test certs on the fly and embedding the stuff in ui directly in our main executable.)
Ah, you're copying out of the cockroach-dev container to a local dir and then from the local dir back into a new container. I did this in a project at Square and didn't have any weird permission issues, though I was using boot2docker on Mac OS X.
Prior to this change, creating a deploy image would leave behind a directory `./build/deploy/build` containing files owned by root (created by mounting the folder into the Docker container). This change simply invokes `rm` in a similar container to clean up those temporary files. Additionally, the process of extracting files from the dev build has been refactored slightly to avoid unnecessary copying.
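The flow described above can be sketched roughly as follows. This is a hypothetical sketch, not the actual contents of mkimage.sh: the image names, paths, and the overridable `DOCKER` variable are assumptions for illustration.

```shell
#!/bin/sh
# Overridable so the commands can be dry-run without a docker daemon
# (an assumption for this sketch, not part of the original script).
DOCKER=${DOCKER:-docker}

# Copy the build artifacts (the cockroach binary, ./resources, ./ui) out of
# the dev container into a host directory mounted at /build.
export_artifacts() {
  "$DOCKER" run --rm -v "$1:/build" cockroachdb/cockroach-dev \
    cp -r /cockroach/cockroach /cockroach/resources /cockroach/ui /build
}

# Remove the root-owned temporary directory by running rm inside a container
# that mounts the same host directory, avoiding the need for sudo on the host.
cleanup_build_dir() {
  "$DOCKER" run --rm -v "$1:/build" busybox rm -rf /build/deploy/build
}

# Example usage (requires a running docker daemon):
#   export_artifacts "$(pwd)/build/deploy/build"
#   docker build -t cockroachdb/cockroach build/deploy
#   cleanup_build_dir "$(pwd)/build"
```

Running `rm` inside a container sidesteps the permission problem entirely: the files are owned by root because the container's root created them, so a container (also running as root, with the same mount) can delete them without host privileges.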
build/deploy/mkimage.sh
I assume this is because you're developing on Linux and Docker is run as root, correct?
Correct. This isn't causing you harm on OS X, right?
No, it should be fine.
LGTM
light refactoring of mkimage.sh and deploy Dockerfile
We have been seeing long startup times which disappear spontaneously. During a
restart of the beta cluster, the following goroutine was observed, which suggests
that we were spending a lot of time GCing replicas on startup.
```
engine    ??:0                      _Cfunc_DBIterNext(#324, #323, 0, 0, 0, 0, 0, 0, 0)
engine    rocksdb.go:1135           (*rocksDBIterator).Next(#235)
storage   replica_data_iter.go:104  (*ReplicaDataIterator).Next(#316)
storage   store.go:1748             (*Store).destroyReplicaData(#109, #317, 0, 0)
storage   store.go:841              (*Store).Start.func2(0x101b, #300, 0x36, 0x40, #301, 0x36, 0x40, #315, 0x3, 0x4, ...)
storage   store.go:734              IterateRangeDescriptors.func1(#306, 0x40, 0x41, #307, 0x92, 0x92, #341, 0, 0x186c, 0x4000, ...)
engine    mvcc.go:1593              MVCCIterate(#329, #68, #47, #81, #232, 0x9, 0x10, #233, 0xb, 0x10, ...)
storage   store.go:738              IterateRangeDescriptors(#330, #196, #47, #81, #195, #179, #110)
storage   store.go:867              (*Store).Start(#109, #330, #196, #179, #185, 0x1)
server    node.go:405               (*Node).initStores(#78, #330, #196, #98, 0x1, 0x1, #179, 0, #55)
server    node.go:330               (*Node).start(#78, #330, #196, #42, #129, #98, 0x1, 0x1, 0, 0, ...)
server    server.go:431             (*Server).Start(#5, #330, #196, #95, 0x1)
cli       start.go:368              runStart(#34, #178, 0, 0x9, 0, 0)
cobra     command.go:599            (*Command).execute(#34, #177, 0x9, 0x9, #34, #177)
cobra     command.go:689            (*Command).ExecuteC(#33, #70, #343, #72)
cobra     command.go:648            (*Command).Execute(#33, #71, #343)
cli       cli.go:96                 Run(#64, 0xa, 0xa, 0, 0)
main      main.go:37                main()
```