
Conversation

@runcom
Member

@runcom runcom commented Apr 24, 2015

close #12255
close #12151
close #12713
close #10389

Related to #12905

Signed-off-by: Antonio Murdaca me@runcom.ninja

@jessfraz
Contributor

🎀

@runcom
Member Author

runcom commented Apr 24, 2015

I think the docs should be updated too, because there is no more test-integration

Member Author

maybe Shutdown

Contributor

I like Close much more

@LK4D4
Contributor

LK4D4 commented Apr 25, 2015

Yeah, iptables cleanup is fun :)
@jfrazelle What job was protected against shutdown for this?

@LK4D4
Contributor

LK4D4 commented Apr 25, 2015

@runcom Just noticed that you still have the supervisor. It might be simpler to fix the bug after moving everything to daemon.Shutdown.

@LK4D4
Contributor

LK4D4 commented Apr 25, 2015

About cleanup: you should probably wrap api.Close() and daemon.Shutdown in sync.Onces and also call them in a defer of NewDaemon.
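For illustration, a minimal sketch of the sync.Once idea; the type and function names here (apiServer, dockerDaemon, makeCleanups) are hypothetical stand-ins, not code from this PR:

```go
package main

import "sync"

// Hypothetical stand-ins for the real API server and daemon.
type apiServer struct{}

func (s *apiServer) Close() { /* stop listening */ }

type dockerDaemon struct{}

func (d *dockerDaemon) Shutdown() { /* stop containers, remove pidfile, ... */ }

// makeCleanups wraps each cleanup in a sync.Once, so they can be called
// from a defer in the daemon startup path *and* from the signal trap
// without running twice.
func makeCleanups(api *apiServer, d *dockerDaemon) (closeAPI, shutdownDaemon func()) {
	var apiOnce, daemonOnce sync.Once
	closeAPI = func() { apiOnce.Do(api.Close) }
	shutdownDaemon = func() { daemonOnce.Do(d.Shutdown) }
	return closeAPI, shutdownDaemon
}

func main() {
	closeAPI, shutdownDaemon := makeCleanups(&apiServer{}, &dockerDaemon{})
	defer shutdownDaemon() // runs at most once, even if already called below
	defer closeAPI()

	// ... start the API server and run the daemon here ...
	closeAPI()
	shutdownDaemon()
}
```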

@LK4D4
Contributor

LK4D4 commented Apr 25, 2015

@runcom OK, the bug is in calling the shutdowns from the daemon in Trap. I think everything will be fine once you refactor it to only call api.Close in Trap.

@runcom
Member Author

runcom commented Apr 25, 2015

@LK4D4 there are still issues..
If signal.Trap is only used for api.Close, then neither kill -9 nor Ctrl-C cleans up the daemon (or the pidfile), because only api.Close gets executed; eng.Shutdown used to do both, closing the API and cleaning up the daemon.
Cleaning the pidfile in a defer func in docker/daemon could work, since it suffers the same problem as the daemon shutdown when only api.Close is trapped.
Cleanup of the driver and graph is now duplicated, because it is called once if the daemon could not be initialized and again if an error occurs after it is initialized.

I'm porting tests to integration-cli in the meantime

@runcom runcom force-pushed the say-bye-to-engine branch 4 times, most recently from 9360fb3 to 9ea81f2 on April 25, 2015 17:07
@LK4D4
Contributor

LK4D4 commented Apr 25, 2015

@runcom Yeah, because Trap sucks. Cleanup of the graph and related state when daemon initialization fails should happen inside NewDaemon.
It makes sense to refactor api.Server to have a Wait() that blocks until all servers are closed; then the cleanup for the trap would be:

api.Close()
api.Wait()
daemon.Shutdown()

But for now you can use a waitAPI channel or something like that from mainDaemon.
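A rough sketch of what the Wait() idea could look like, with a toy server type standing in for api.Server; the WaitGroup-based implementation below is only an assumption for illustration, not the PR's code:

```go
package main

import (
	"net"
	"sync"
)

// toyServer stands in for api.Server: Close stops the listeners,
// Wait blocks until every serve loop has returned.
type toyServer struct {
	listeners []net.Listener
	wg        sync.WaitGroup
}

func (s *toyServer) serve(l net.Listener) {
	s.wg.Add(1)
	go func() {
		defer s.wg.Done()
		for {
			conn, err := l.Accept()
			if err != nil {
				return // listener was closed
			}
			conn.Close()
		}
	}()
}

func (s *toyServer) Close() {
	for _, l := range s.listeners {
		l.Close()
	}
}

func (s *toyServer) Wait() {
	s.wg.Wait()
}

func main() {
	l, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	srv := &toyServer{listeners: []net.Listener{l}}
	srv.serve(l)

	// Cleanup order from the comment above: stop accepting requests,
	// wait for the serve loops to drain, then shut the daemon down.
	srv.Close()
	srv.Wait()
	// daemon.Shutdown() would go here.
}
```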

@runcom
Member Author

runcom commented Apr 25, 2015

@LK4D4 cool, but then how do you trap both api.Close and daemon.Shutdown? The old behaviour trapped api.Close + d.Shutdown via engine.Shutdown, while we now have only a single call to signal.Trap(api.Close). Should Trap also accept a slice of funcs to execute?
Also, Trap is only used inside docker/daemon..

@LK4D4
Contributor

LK4D4 commented Apr 25, 2015

You can create a closure with any functions you want.
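For example, a minimal sketch: apiClose, apiWait and daemonShutdown are hypothetical placeholders, and the only thing assumed about pkg/signal is what this thread already relies on, namely that Trap takes a single func() to run on the trapped signal:

```go
package main

import (
	"fmt"

	"github.com/docker/docker/pkg/signal"
)

func main() {
	// Hypothetical cleanup funcs; in the daemon these would be the real
	// api.Close, a wait on the API goroutines, and daemon.Shutdown.
	apiClose := func() { fmt.Println("api closed") }
	apiWait := func() { fmt.Println("api drained") }
	daemonShutdown := func() { fmt.Println("daemon shut down") }

	// Instead of signal.Trap(api.Close), pass one closure that runs all
	// of the cleanups in order.
	signal.Trap(func() {
		apiClose()
		apiWait()
		daemonShutdown()
	})

	select {} // block, roughly like mainDaemon would
}
```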

@runcom
Member Author

runcom commented Apr 25, 2015

/me shame (but tests are green now)

@runcom runcom force-pushed the say-bye-to-engine branch 8 times, most recently from 890ba7e to 42aad7a on April 25, 2015 20:40
daemon/daemon.go Outdated
Member

Only closing the graph if there was no error? Don't we want to close it no matter what?
Never mind, this isn't the piece of code I thought it was.

@runcom runcom force-pushed the say-bye-to-engine branch 4 times, most recently from 1661c14 to 1820616 on April 27, 2015 22:04
@LK4D4
Contributor

LK4D4 commented Apr 27, 2015

LGTM

docker/daemon.go Outdated
Member

So we're just logging that the daemon needs to be force-stopped?

Contributor

It means that probably not all containers stopped within 15 seconds, and we just brutally kill the daemon.

Member

But where are we killing it?

Contributor

in signal.Trap

Member

I know signal.Trap has its own bail-out behaviour (3 signals from the user)... but before, the engine was exiting after a ~10s timeout period. That timeout could probably be increased now, but it still seems like we should kill it after the timeout.

Contributor

@cpuguy83 Who should we kill? :) If the mainDaemon function exits, the process just stops; nothing can survive that.

Member

So as-is, the daemon will just hang until someone sends two more kill signals to it.
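For reference, a minimal sketch of the timeout-bounded shutdown being debated in this thread; the 15-second value comes from the comment above, and stopAllContainers is a hypothetical placeholder rather than the PR's actual code:

```go
package main

import (
	"log"
	"os"
	"time"
)

// shutdownWithTimeout runs the container shutdown in the background and
// force-exits the process if it does not finish within the timeout.
func shutdownWithTimeout(stopAllContainers func(), timeout time.Duration) {
	done := make(chan struct{})
	go func() {
		stopAllContainers()
		close(done)
	}()

	select {
	case <-done:
		log.Println("all containers stopped cleanly")
	case <-time.After(timeout):
		log.Println("timeout: containers still running, force-stopping the daemon")
		os.Exit(1)
	}
}

func main() {
	// Placeholder: pretend the containers take one second to stop.
	shutdownWithTimeout(func() { time.Sleep(time.Second) }, 15*time.Second)
}
```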

@cpuguy83
Member

Looks really good, just one question, but otherwise LGTM.

@runcom runcom force-pushed the say-bye-to-engine branch 3 times, most recently from 3c6198c to 9f21dab on April 28, 2015 20:58
@LK4D4
Contributor

LK4D4 commented Apr 28, 2015

@runcom Is it ready?

@LK4D4
Contributor

LK4D4 commented Apr 28, 2015

@runcom Oh, requires rebase, sorry :)

@runcom runcom force-pushed the say-bye-to-engine branch 3 times, most recently from 958f68c to c0c2c7b on April 29, 2015 11:56
@runcom
Member Author

runcom commented Apr 29, 2015

@LK4D4 almost, there are many tests inside integration/ that still have to be ported to integration-cli/; I'm doing my best, as fast as I can 😄

@runcom runcom force-pushed the say-bye-to-engine branch from c0c2c7b to 4b8cff6 on April 29, 2015 14:19
@estesp
Contributor

estesp commented Apr 29, 2015

Is this ready to go, @LK4D4?

Antonio Murdaca added 2 commits April 30, 2015 01:35
Signed-off-by: Antonio Murdaca <me@runcom.ninja>
Signed-off-by: Antonio Murdaca <me@runcom.ninja>
@runcom runcom force-pushed the say-bye-to-engine branch from 4b8cff6 to f7e417e on April 29, 2015 23:45
@LK4D4
Contributor

LK4D4 commented Apr 30, 2015

The failures look like the registry is down.

@runcom
Member Author

runcom commented Apr 30, 2015

@LK4D4 yup, it needs a rebuild.
Tests are green now.

@tiborvass
Contributor

LGTM

tiborvass added a commit that referenced this pull request Apr 30, 2015
@tiborvass tiborvass merged commit 0d0b425 into moby:master Apr 30, 2015
@runcom runcom deleted the say-bye-to-engine branch April 30, 2015 16:18
@runcom
Member Author

runcom commented Apr 30, 2015

🎉

@tiborvass
Contributor

Huge thanks to @runcom and everyone involved in #12151 !!

RIP engine
