Any ideas for scaling? #53

Merged
sheeep merged 7 commits into 1up-lab:release-1.0 from mitom:release-1.0
Oct 14, 2013

Conversation

@mitom
Contributor

@mitom mitom commented Oct 9, 2013

I was wondering if you have any ideas about putting this behind a load balancer. (Given that we avoid trying to route requests from the same client to the same machine)

What I am wondering about is that chunked uploads would only work if the temporary place where the chunks ("chunk" as per one of our previous discussions) are stored is a shared location.
But then again, that would mean the latest chunk is uploaded to the server, the remote file is read, the latest chunk is written to it, and the result is saved back to the shared place so the next request can access it from a possibly different server, before it is finally moved to its final place. Sounds pretty ineffective to me.

Do you have any ideas how we could improve this, or am I missing something?

@sheeep
Contributor

sheeep commented Sep 29, 2013

Hi @mitom

That's a pretty good question, and I must admit that so far I haven't thought about this issue.

What I am wondering about is that chunked uploads would only work if the temporary place where the chunks are stored is a shared location.

This is exactly what I was thinking. Gaufrette is quite helpful considering this use case.

But then again, that would mean the latest chunk is uploaded to the server, the remote file is read, the latest chunk is written to it, and the result is saved back to the shared place so the next request can access it from a possibly different server, before it is finally moved to its final place. Sounds pretty ineffective to me.

Well, even if it is ineffective, it sounds like a reasonable way to go, assuming that Gaufrette is able to move files directly between two filesystems. Actually, I'm pretty sure someone once told me that this is not the case. If so, the chunk directory must be on the same connected filesystem as the final place for uploaded files.

Another solution - even if it seems more like a hack to me - would be to maintain a list of chunks in some kind of shared service, along with their dedicated server (and of course the offset and other metadata). This could be a Redis instance, memcache or the like. I'm not a big fan of this because it sounds pretty prone to failure.

I'll take a look at Gaufrette's source code one of these days. But maybe there are other possible solutions? I'm open to everything.

@mitom
Contributor Author

mitom commented Sep 29, 2013

Assuming that Gaufrette is able to move files directly between two filesystems.

If I remember correctly it does not move files, only reads and writes them, so moving things with Gaufrette carries a bit of overhead. However, I am not certain about this; I'll also take a look at the source tomorrow to see if there are other options.

On the other hand, we could use the same method for writing the chunks as you used for uploading them: simply reading the current chunk and appending it to the end of the already assembled chunks file (we should agree on naming these things for clarity..) in StreamMode. I am not sure how well that would work for other backends, say S3, but StreamMode is handled by Gaufrette, so I think it should work? Also, the next chunk should only start uploading once the previous one has finished (having received a response), so we wouldn't have to worry about the order.
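
A rough sketch of that appending idea, assuming a Gaufrette adapter that supports streaming (per the discussion below, only the Local adapter implements StreamFactory); the function name `appendChunk` and the variables are made up for illustration:

```php
<?php
// Sketch only: append a freshly uploaded chunk to the partially
// assembled file through Gaufrette's stream API. Requires an adapter
// implementing StreamFactory; appendChunk, $assembledKey and
// $chunkPath are hypothetical names, not part of the bundle.
use Gaufrette\Filesystem;
use Gaufrette\StreamMode;

function appendChunk(Filesystem $filesystem, $assembledKey, $chunkPath)
{
    // Open (or create) the assembled file in binary append mode.
    $target = $filesystem->createStream($assembledKey);
    $target->open(new StreamMode('ab'));

    // Copy the chunk onto its end in fixed-size pieces, so the whole
    // chunk never has to sit in memory at once.
    $source = fopen($chunkPath, 'rb');
    while (!feof($source)) {
        $target->write(fread($source, 1024 * 1024));
    }

    fclose($source);
    $target->close();
}
```

Since the next chunk is only sent after the previous request has returned, no extra ordering logic would be needed around this append.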

The only problem would be if someone used an uploader that uploads multiple parts at once - I have no idea for that scenario. I think having to assemble the chunks already makes this an inefficient approach, but still..

I think introducing a cache to store the location of the chunks adds a lot of dependencies - including having to maintain the database itself - so I would agree to avoid that if there are other ways.

At the moment I have no other ideas, but I'll give it another thought tomorrow.

@mitom
Contributor Author

mitom commented Sep 30, 2013

I am not sure how well that would work for other backends, say S3, but StreamMode is handled by Gaufrette, so I think it should work?

This one I can already answer myself: only the Local backend implements StreamFactory, otherwise the InMemory adapter is used, which would mean moving the file back and forth and into memory for every chunk. I guess that wouldn't work with bigger files.

On the other hand, if it were a Local filesystem - which could also mean mounted directories - appending the chunks to it seems like a nice way to me. It could also mean that even the uploading of files could be spread among individual requests, as we could check if the Gaufrette filesystem is the same for the temporary location and the final location, and then a simple rename, which takes virtually no time, would finish the upload. It would also bring another very handy improvement: I don't like my application servers to have more hard drive storage than necessary, and having files assembled there means they all need relatively big space available. Moving the files per chunk would make the required space dramatically smaller and easy to calculate, suddenly making this bundle save money.
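
The same-filesystem shortcut could look roughly like this - a sketch with hypothetical names (`moveToFinal` and the keys are invented), and whether comparing the injected services is a sufficient identity check is exactly the open question:

```php
<?php
// Sketch: finish an upload cheaply when chunk storage and final storage
// are the same Gaufrette filesystem; otherwise fall back to the costly
// read/write (copy + delete) path. All names here are hypothetical.
use Gaufrette\Filesystem;

function moveToFinal(Filesystem $chunks, Filesystem $target, $assembledKey, $finalKey)
{
    if ($chunks === $target) {
        // Same service: a rename takes virtually no time.
        $chunks->rename($assembledKey, $finalKey);
    } else {
        // Different backends: the whole file travels through the
        // application server once more.
        $target->write($finalKey, $chunks->read($assembledKey), true);
        $chunks->delete($assembledKey);
    }
}
```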

The above is obviously the most optimal case. If someone stored their files on some other backend, the rename approach wouldn't work, and the uploading would have to be done the Gaufrette read/write way. I could only find one reference stating that it can't move files, but I have not seen anything in the code that would imply it can, or that it should be able to.

So all in all, if someone stored their files on e.g. S3, this would be quite the overhead for them, I guess, but I still see no way around it other than solving it at the load-balancing level, which I think could be risky. Still, I think you'll agree this behaviour should be configurable; if they disable it and go another way, the changes won't affect them.

Another option that came to my mind - which might fall out of the scope of this bundle and would only improve things for those with backends other than Local - is that the tmp backend could also be used as a web server, and the upload, instead of actually uploading, could just return information about the file, such as its key and so on. Maybe the upload should rather call a handler, as the validation does (except using only one), and fall back to the default if there is no handler. Using that, one could create a background job that copies the file over to the desired location, looks it up in the database, changes its location and then removes it from temp.

Let me know your thoughts on these, or if you have a better idea?

@sheeep sheeep mentioned this pull request Oct 8, 2013
7 tasks
@mitom
Contributor Author

mitom commented Oct 8, 2013

Any remarks on my propositions? I'd be happy to help out (though I am quite busy these days). Let me know your opinion, and then I'll try to find some time for it and make a PR for the Gaufrette tmp part.

The other one is less important to me, but if I get more free time I'd be happy to work on that as well.

@sheeep
Contributor

sheeep commented Oct 8, 2013

Hi @mitom

I'm sorry it took so long, but unfortunately I'm quite busy too.

Apparently a load balancer can be configured to be session-sticky. This would be the simplest and unfortunately the worst solution. You mentioned that in your initial comment:

Given that we avoid trying to route requests from the same client to the same machine

I think you correctly summarized the problem (as well as the pitfalls) of using Gaufrette for the temporary chunk directory. There is however a limitation you already mentioned: using Gaufrette with two different storage layers (one for the chunk directory and one for the final storage) is a terribly bad solution, as it would suck up way too much memory/time. I'm not sure if this should be prevented or just mentioned in the documentation, because I think someone should be free to hazard the consequences.

as we could check if the Gaufrette filesystem is the same for the temporary location and the final location

Is this really possible? I guess the only thing we could check is whether the provided services match. Should be enough, however. :)

So all in all, if someone stored their files on e.g. S3, this would be quite the overhead for them, I guess, but I still see no way around it other than solving it at the load-balancing level, which I think could be risky.

Recently I came across an idea regarding this issue. Someone mentioned that uploading the final file to a backend like S3 should take place asynchronously. Otherwise, a client could face a timeout after having uploaded the last chunk. (After which the actual move to the final destination takes place.)

How about introducing a configurable option for whether fully uploaded files should be moved asynchronously, performed by a command (run as a cronjob or the like)? This way, a user could still choose to upload the chunks to a Local backend and move them to S3/whatever afterwards. But then the user won't be able to return a file path to the frontend to immediately address the file. Actually, this option is something that does not fully belong to this issue; this is why I made it a separate point in the list of the Release-1.0 pull request.
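
The cronjob idea above could be sketched as a small console command; everything below (class name, command name, the flat key layout) is invented for illustration and not part of the bundle:

```php
<?php
// Hypothetical sketch: a Symfony console command, run from a cronjob,
// that pushes fully uploaded files from a local directory to the final
// Gaufrette backend and frees the local disk space afterwards.
use Symfony\Component\Console\Command\Command;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;

class MoveUploadsCommand extends Command
{
    private $localDir; // local directory holding finished uploads
    private $remote;   // Gaufrette\Filesystem of the final backend

    public function __construct($localDir, \Gaufrette\Filesystem $remote)
    {
        parent::__construct();
        $this->localDir = $localDir;
        $this->remote = $remote;
    }

    protected function configure()
    {
        $this->setName('oneup:uploads:move'); // invented command name
    }

    protected function execute(InputInterface $input, OutputInterface $output)
    {
        foreach (glob($this->localDir.'/*') as $file) {
            // Copy to the remote backend, then delete the local copy.
            $this->remote->write(basename($file), file_get_contents($file), true);
            unlink($file);
            $output->writeln(sprintf('Moved %s', basename($file)));
        }

        return 0;
    }
}
```

The trade-off stays as described: the controller can no longer return the final file path right away, since the file only reaches its destination on the next cron run.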

To summarize the above:

  • Let a user have a Gaufrette storage backend for the chunked upload directory.
  • Make it dead clear in the documentation that it is a not-so-smart (not to say stupid) idea to use an external storage if the application should handle big files.
  • Create a possibility to move fully uploaded files asynchronously.

Is this more or less what you meant?

Thanks again for putting such effort into this!

@mitom
Contributor Author

mitom commented Oct 8, 2013

Someone mentioned that uploading the final file to a backend like S3 should take place asynchronously.

I think that was me in my previous comment; only I had a Resque job in mind. But you are right, this should be a separate issue ^^

Using that, one could create a background job that copies the file over to the desired location, looks it up in the database, changes its location and then removes it from temp.

You summed up my proposition correctly, I'll start working on this in a couple of days.

@sheeep
Contributor

sheeep commented Oct 8, 2013

You summed up my proposition correctly, I'll start working on this in a couple of days.

Ha, sorry, I guess I missed the last paragraph. :)

@mitom
Contributor Author

mitom commented Oct 9, 2013

This is definitely not a merge-ready PR; I just want your opinion, @sheeep, on whether it's going in the right direction.

edit: oh, and I forgot to say: regardless of all the possible disasters it can cause at the moment, and the quick fixes, it works. ^^

@sheeep
Contributor

sheeep commented Oct 9, 2013

Whoa. @mitom

Thanks for that PR! It seems awesome; just a few little things I stumbled upon. But you have to tell me: What kind of sorcery is that? How did you manage to transform this issue into a pull request? =D

I'll take a more thorough look at it tomorrow morning.

@sheeep
Contributor

sheeep commented Oct 10, 2013

@mitom

This is an extremely sophisticated pull request; I'd love to merge it!
Just some small things I saw while digging through:

  • The tests are currently failing. I think the method getName should be renamed to getFilename.
  • There are some minor coding style flaws. You can use php-cs-fixer to fix them automatically.
  • For a few things it would be nice to have test coverage. (Chunked uploads with Gaufrette filesystem, new File classes, changed validators..)

As you seem to know what you are doing when contributing to projects on GitHub: Do you know if it is possible for me to push into this pull request? Or should I open a pull request on your branch for that?

@mitom
Contributor Author

mitom commented Oct 10, 2013

I'd love to merge it!

As I said, I didn't mean for this to be merged yet, so please don't even think about it until I declare it ready on my side.

What kind of sorcery is that? How did you manage to transform this issue into a pull request? =D

I ask GitHub's API to do it for me, as described here. I usually use curl, as the second answer says.

Do you know if it is possible for me to push into this pull request? Or should I open a pull request on your branch for that?

You should PR to my branch, and after I merge, it will appear here. But if you don't mind waiting a bit, please don't start working on it until tomorrow. I intend to refactor this a bit more today and fix things up. Depending on what you intend to do, it might make them hard to merge.

The tests are currently failing. I think the method getName should be renamed to getFilename.
There are some minor coding style flaws. You can use php-cs-fixer to fix them automatically.
For a few things it would be nice to have test coverage. (Chunked uploads with Gaufrette filesystem, new File classes, changed validators..)

Sorry, I didn't bother with the tests; it was late when I finished making it work. I was intending to write tests as well (and will do so when I finish refactoring), I simply didn't have the time and energy for it, which is why I stated it was not to be merged yet. I opened the PR so you could take a look at whether the direction is good.

@mitom
Contributor Author

mitom commented Oct 10, 2013

@sheeep I think I've finished refactoring, but it is still not merge-ready. Four things remain to do:

  1. take a look at the orphanage; I realised I completely forgot about it. I am not sure if it still works or not, however the tests pass, which I guess means it works for local storage.
  2. write tests for the new classes <- I hate this, but will do eventually.
  3. extend the documentation about the changes and upgrade notes
  4. Make sure the theories are valid in practice. I think the methods in the file wrappers (probably only GaufretteFile) are somewhat unsafe to use, but I couldn't really come up with better ones at the time; improvements and suggestions are welcome.

In the end, this PR also includes several fixes for Scrutinizer.

Whatever you wanted to PR about is now welcome, especially if you wanted to write tests :P

@mitom
Contributor Author

mitom commented Oct 10, 2013

Ha!! Look at that!! A passing version!!

@sheeep
Contributor

sheeep commented Oct 10, 2013

Wow, quite amazing work again. I just scanned it briefly and will have a deeper look at it tomorrow morning.

it has a dependency on symfony/finder >= 2.2.0, there is no need to see failing tests which are not supposed to pass.

I never thought about this, but if Symfony 2.1 is not able to resolve symfony/finder >= 2.2.0, we should also set the dependency of symfony/symfony to >2.1. End of life for 2.1 is 11/2013. Probably release-1.0 of this bundle will be ready in 1-2 weeks, so this would be OK for me.

Whatever you wanted to PR about is now welcome, especially if you wanted to write tests :P

Ahh, writing test cases. My secret hobby =D To be honest, a good idea, because it forces me to get a deep understanding of the changes.

@sheeep
Contributor

sheeep commented Oct 10, 2013

Ha!! Look at that!! A passing version!!

Aaaamazing! ^^

@mitom
Contributor Author

mitom commented Oct 10, 2013

Ahh, writing test cases. My secret hobby =D To be honest, a good idea, because it forces me to get a deep understanding of the changes.

We can actually do that together; honestly, I had more trouble understanding how the tests work than the functionality itself ^^ I'll try to write some docs tomorrow about the new configs and their effects to make it a bit easier for you, but I guess that's the trivial part. I probably also won't make big commits anymore, unless the orphanage is heavily broken, so things should be easy to merge.

I would also welcome your insight on the changes, as there are quite a few mission-critical things in there, and I did these while taking a break from my job, so there's no guarantee my logic is spot on.

@sheeep
Contributor

sheeep commented Oct 11, 2013

Hi @mitom

A few questions arose.

  • I don't really get the Uploader\Storage\OrphanageStorage class. It is apparently the only class that implements Uploader\Storage\OrphanageStorageInterface and extends FilesystemStorage. And there is Uploader\Chunk\Storage\(Filesystem|Gaufrette)Storage. Ah, I think I get it. These are two different types of "Storages". We should probably rename Uploader\Storage\OrphanageStorage. It's more like a manager, isn't it?
    Ha, but wait. There is already an Uploader\Orphanage\OrphanageManager. Could we merge those two together? I'm quite confused right now.
  • mitom@85dd84a: Why this?

Okay, I checked all your changes, but haven't tested it yet. More feedback is yet to come. :)

@mitom
Contributor Author

mitom commented Oct 11, 2013

Good morning @sheeep

I don't really get the Uploader\Storage\OrphanageStorage class

Yeah, it should be FilesystemOrphanageStorage, and there should be a GaufretteOrphanageStorage too; at the moment the orphanage only works for local storage - as it did before. And yeah, I also agree that the OrphanageManager should be merged with the storages, as in the functionality moved to them (pretty much the same thing as with the ChunkManager). I haven't put much thought into the orphanage yet.

mitom/OneupUploaderBundle@85dd84a Why this?

I had trouble with the interfaces there... The truth is, the design is a bit faulty now, as FilesystemStorage should only get FilesystemFiles, while GaufretteStorage can handle both types. The configuration somewhat ensures they work like that. I removed the limitation first, because it didn't seem right to type-hint invalidly; then, since I had trouble tracking down an error in the tests, I put them back in mitom/OneupUploaderBundle@7884373 - still better than nothing. If FilesystemStorage somehow gets a GaufretteFile, it will throw an exception when it calls move on it.

Also, the storages should implement StorageInterface for the controllers, so they can't rely on different interfaces without an even more confusing structure.

@mitom
Contributor Author

mitom commented Oct 11, 2013

@sheeep I've written some docs; they won't be much help, but at least they are there.

@mitom
Contributor Author

mitom commented Oct 11, 2013

Okay @sheeep, I've also written tests and extended the docs too (raining here ☔). They went surprisingly easily; I didn't realise my changes were so compatible with the existing things. So what this pull request does:

  1. Enables usage of a stream-capable filesystem as chunk storage
  2. Enables using the same filesystem as the orphanage
  3. Takes a performance-first mindset in options and settings
  4. Removes Symfony 2.1 from Travis testing
  5. Fixes several Scrutinizer issues (all but 2)

I think I'm done with what I wanted to do. Feedback, questions and suggestions/changes are still welcome. Please make sure you go through everything before you merge it though.

Lastly, I'd like to thank you for this bundle, and for your efforts on my suggestions and propositions.

@sheeep
Contributor

sheeep commented Oct 11, 2013

I've also written tests and extended the docs too (raining here ☔)

Must be terrible weather. ^^

Removes Symfony 2.1 from Travis testing

What do you think? Should we also drop general support for Symfony 2.1? See #53 (comment).

I think I'm done with what I wanted to do. Feedback, questions and suggestions/changes are still welcome. Please make sure you go through everything before you merge it though.

It's a fairly large PR. I need some time to look through it fully, but I'm optimistic that it will be merge-ready by Sunday. Thanks for all your contributions and effort!

@sheeep
Contributor

sheeep commented Oct 11, 2013

Hmm. Strange things are happening on my local machine. The Travis build of mitom@71d6118 seems to pass. However, locally I get 2 errors. Any idea what this could be?

// edit: composer updated, latest commit pulled

2muchair ~/dev/mitom/OneupUploaderBundle 
⎈ ⇢ git branch
  master
* release-1.0
2muchair ~/dev/mitom/OneupUploaderBundle 
⎈ ⇢ phpunit
PHPUnit 3.7.27 by Sebastian Bergmann.

Configuration read from /Users/jimschmid/dev/mitom/OneupUploaderBundle/phpunit.xml.dist

.............S...................S..................S..........  63 / 192 ( 32%)
.........S........................S..................S......... 126 / 192 ( 65%)
.........S..................................................SEE 189 / 192 ( 98%)
...

Time: 3.99 seconds, Memory: 54.00Mb

There were 2 errors:

1) Oneup\UploaderBundle\Tests\Uploader\Storage\GaufretteOrphanageStorageTest::testUpload
Gaufrette\Exception\FileNotFound: The file "/private/chunks/uploadernlsdTU" was not found.

/Users/jimschmid/dev/mitom/OneupUploaderBundle/vendor/knplabs/gaufrette/src/Gaufrette/Filesystem.php:274
/Users/jimschmid/dev/mitom/OneupUploaderBundle/vendor/knplabs/gaufrette/src/Gaufrette/Filesystem.php:62
/Users/jimschmid/dev/mitom/OneupUploaderBundle/Uploader/Storage/GaufretteStorage.php:32
/Users/jimschmid/dev/mitom/OneupUploaderBundle/Uploader/Storage/GaufretteOrphanageStorage.php:45
/Users/jimschmid/dev/mitom/OneupUploaderBundle/Tests/Uploader/Storage/GaufretteOrphanageStorageTest.php:69
/usr/local/php5-5.5.0-20130621-132030/lib/php/PHPUnit/TextUI/Command.php:176
/usr/local/php5-5.5.0-20130621-132030/lib/php/PHPUnit/TextUI/Command.php:129

2) Oneup\UploaderBundle\Tests\Uploader\Storage\GaufretteOrphanageStorageTest::testUploadAndFetching
Gaufrette\Exception\FileNotFound: The file "/private/chunks/uploader6nSkvi" was not found.

/Users/jimschmid/dev/mitom/OneupUploaderBundle/vendor/knplabs/gaufrette/src/Gaufrette/Filesystem.php:274
/Users/jimschmid/dev/mitom/OneupUploaderBundle/vendor/knplabs/gaufrette/src/Gaufrette/Filesystem.php:62
/Users/jimschmid/dev/mitom/OneupUploaderBundle/Uploader/Storage/GaufretteStorage.php:32
/Users/jimschmid/dev/mitom/OneupUploaderBundle/Uploader/Storage/GaufretteOrphanageStorage.php:45
/Users/jimschmid/dev/mitom/OneupUploaderBundle/Tests/Uploader/Storage/GaufretteOrphanageStorageTest.php:85
/usr/local/php5-5.5.0-20130621-132030/lib/php/PHPUnit/TextUI/Command.php:176
/usr/local/php5-5.5.0-20130621-132030/lib/php/PHPUnit/TextUI/Command.php:129

FAILURES!
Tests: 192, Assertions: 521, Errors: 2, Skipped: 8.

@mitom
Contributor Author

mitom commented Oct 11, 2013

What do you think? Should we also drop general support for Symfony 2.1? See #53 (comment).

I removed the Travis test because it would fail all the time on one of the methods of UploadedFile (I can't recall which), so yes, I think it would make sense to raise the deps.

Hmm. Strange things are happening on my local machine. The Travis build of mitom/OneupUploaderBundle@71d6118 seems to pass. However, locally I get 2 errors. Any idea what this could be?

// edit: composer updated, latest commit pulled

That is strange. I just tried it with a fresh checkout, and everything went smoothly. I can't tell what's wrong on your side, as Travis seems to be able to work with it too. What PHP version do you have? (I don't really think it matters, tbh)

What you could try is adding a sleep or exit or whatever before the for loop in one of those methods and dumping out what it is going to loop through, then checking manually if the files are in place.
Something like:

    public function testUpload()
    {
        var_dump($this->payloads);
        exit();
        for ($i = 0; $i < $this->numberOfPayloads; $i ++) {
            $this->orphanage->upload($this->payloads[$i], $i . 'notsogrumpyanymore.jpeg');
        }
    }

edit: I just noticed that it says /private/chunks/somefile is missing. That is strange, as it should start with /chunks/somefile.
You could also dump out the keys it creates in setUp():

            //gaufrette needs the key relative to its root
            $fileKey = str_replace($this->realDirectory, '', $file);
            echo "original: $file - parsed -> $fileKey" .PHP_EOL;
            $this->payloads[] = new GaufretteFile(new File($fileKey, $filesystem), $filesystem);

That should give you something similar to this:

Configuration read from /home/milka/Projects/test_oneup/phpunit.xml.dist

.............S...................S..................S..........  63 / 196 ( 32%)
.........S........................S..................S......... 126 / 196 ( 64%)
.........S..................................................S.original: /tmp/storage/chunks/uploaderq19FS0 - parsed -> /chunks/uploaderq19FS0
original: /tmp/storage/chunks/uploadertNYU3l - parsed -> /chunks/uploadertNYU3l
original: /tmp/storage/chunks/uploaderGwvafH - parsed -> /chunks/uploaderGwvafH
original: /tmp/storage/chunks/uploaderleOqq2 - parsed -> /chunks/uploaderleOqq2
original: /tmp/storage/chunks/uploaderVXMHBn - parsed -> /chunks/uploaderVXMHBn
. 189 / 196 ( 96%)
original: /tmp/storage/chunks/uploaderl3MLNI - parsed -> /chunks/uploaderl3MLNI
original: /tmp/storage/chunks/uploaderWSPQZ3 - parsed -> /chunks/uploaderWSPQZ3
original: /tmp/storage/chunks/uploaderktBWbp - parsed -> /chunks/uploaderktBWbp
original: /tmp/storage/chunks/uploaderHy62nK - parsed -> /chunks/uploaderHy62nK
original: /tmp/storage/chunks/uploadervelaA5 - parsed -> /chunks/uploadervelaA5
.original: /tmp/storage/chunks/uploaderNsWCNq - parsed -> /chunks/uploaderNsWCNq
original: /tmp/storage/chunks/uploaderSOG60L - parsed -> /chunks/uploaderSOG60L
original: /tmp/storage/chunks/uploaderAkbBe7 - parsed -> /chunks/uploaderAkbBe7
original: /tmp/storage/chunks/uploadercKo6rs - parsed -> /chunks/uploadercKo6rs
original: /tmp/storage/chunks/uploaderVkmCFN - parsed -> /chunks/uploaderVkmCFN
.original: /tmp/storage/chunks/uploader0wCuT8 - parsed -> /chunks/uploader0wCuT8
original: /tmp/storage/chunks/uploaderJ9Nn7t - parsed -> /chunks/uploaderJ9Nn7t
original: /tmp/storage/chunks/uploaderCqHhlP - parsed -> /chunks/uploaderCqHhlP
original: /tmp/storage/chunks/uploaderDmkcza - parsed -> /chunks/uploaderDmkcza
original: /tmp/storage/chunks/uploaderJqF7Mv - parsed -> /chunks/uploaderJqF7Mv
.....

Time: 2.94 seconds, Memory: 54.00Mb

mitom added 2 commits October 11, 2013 10:29
Raised requirement of symfony/framework-bundle to >=2.2.
@sheeep
Contributor

sheeep commented Oct 14, 2013

Hi @mitom

Without having changed anything, the tests suddenly pass. No idea what that could have been, but it seems that my computer choked on something.

I've tested your changes and found some errors (I guess). It could easily be that I'm doing something wrong.

First of all, I tried to use a Gaufrette filesystem for all the test cases. It leads to an error. Take a look at https://github.com/sheeep/OneupUploaderBundle/commit/4e36e2b014200a131b84e67e6db13cab71aa7602 for the changes made.

Fatal error: Call to undefined method Oneup\UploaderBundle\Uploader\File\GaufretteFile::getClientSize() in /Users/jimschmid/dev/mitom/sheeep/Tests/Controller/AbstractChunkedUploadTest.php on line 40

Changing getClientSize to getSize fixes this error, but leads to another:

Fatal error: Call to undefined method Oneup\UploaderBundle\Uploader\File\GaufretteFile::move() in /Users/jimschmid/dev/mitom/sheeep/Uploader/Storage/FilesystemStorage.php on line 26

I also tried to use it in a minimal application. Given the following configuration, it throws a Gaufrette error after the last chunk is uploaded, because the file was rename'd but the source and target strings were exactly equal. GaufretteStorage:131

$this->filesystem->rename($path.$target, $path.$name);

config.yml

knp_gaufrette:
    adapters:
        gallery:
            local:
                directory: %kernel.root_dir%/../web/uploads
                create: true

        chunks:
            local:
                directory: %kernel.root_dir%/../web/chunks
                create: true

    filesystems:
        gallery:
            adapter: gallery

        chunks:
            adapter: chunks

oneup_uploader:
    chunks:
        maxage: 86400
        storage:
            type: gaufrette
            filesystem: gaufrette.chunks_filesystem

    mappings:
        gallery:
            frontend: fineuploader
            storage:
                type: gaufrette
                filesystem: gaufrette.gallery_filesystem
                sync_buffer_size: 1M

@mitom
Contributor Author

mitom commented Oct 14, 2013

I also tried to use it in a minimal application. Given the following configuration, it throws a Gaufrette error after the last chunk is uploaded, because the file was rename'd but the source and target strings were exactly equal. GaufretteStorage:131

I looked into it. You did something 'wrong' (only if you want to do it in the most effective way) by using different Gaufrette filesystems for the chunk and file storage. However, this is also behaviour that is expected to work, so in the end you did something good by trying it out.

I could reproduce the error only on the second attempt to upload a file. The cause of the error was that the file wasn't deleted from the chunk storage after it was uploaded. 47c9e32 takes care of that; hopefully it will work correctly now (or at least I can't reproduce that error anymore)

@mitom
Contributor Author

mitom commented Oct 14, 2013

About the other one: I noticed that in the configuration you defined the chunk storage as a Gaufrette storage, and the file storage as a local directory.

This is not supported (at least not by me), and SHOULD not be either. There is no valid use case (or tell me one) where you would need to upload the chunks to a remote directory and then download and store the whole file in a local one. This applies to all the other mappings too. I did try changing all of them, but that ended up in lots of failing validation tests; honestly, I haven't checked them. I know that some of them fail because of mime types, which could be helped if the stream_wrapper were configured too, as otherwise it really can't tell the mime type of a file.

I personally think those tests should rather be unit tests with mocked files, as in the current state you can either test a Gaufrette chunk storage or a non-Gaufrette one. If the assembling and validating tests were unit tests, the functional tests could be way lighter and could be limited to testing only local files. Sorry to say, that's something I'm not willing to do - that would be test-writing hell :D

I think you should either leave those alone, or rewrite that part of the tests before merging this; if you PR me, I'll continue to merge them into this one. Then again, I'm terribly sorry for not working on that part, but I really, really hate writing tests :(

@sheeep
Contributor

sheeep commented Oct 14, 2013

hopefully it will work correctly now (or at least I can't reproduce that error anymore)

Nice. I can confirm that it works now.
I just found one file that leads to an empty extension. Probably a frontend-error (so out of scope for this PR). Could you just quickly try it? http://imgur.com/DEIUQ3V

This is not supported (at least not by me), and SHOULD not be either. There is no valid use case (or tell me one) where you would need to upload the chunks to a remote directory and then download and store the whole file in a local one.

You're right. There is no valid use case. I just thought it would be the easiest way to test this feature, but never mind. I'll put this fact into the documentation for future users.

Sorry to say, that's something I'm not willing to do - that would be test-writing hell :D

Nah. We'll leave it this way. :) Maybe if I'm facing times of major boredom in the near future, I'll give it a shot.
So far, great work! More feedback to come this evening.

@mitom
Contributor Author

mitom commented Oct 14, 2013

I just found one file that leads to an empty extension. Probably a frontend error (so out of scope for this PR). Could you just quickly try it? http://imgur.com/DEIUQ3V

For me it correctly ends up as .png. I use the blueimp frontend, and from the config you use fineuploader, so it's probably a frontend thing, as you said.

@sheeep
Contributor

sheeep commented Oct 14, 2013

Hi @mitom

I tested your changes and they seem to work fine. I just pushed some small fixes to your branch (https://github.com/mitom/OneupUploaderBundle/pull/3).

so please don't even think about it until I declare it ready on my side.

What do you think? Is this merge-ready? Is there something left that I missed?

@mitom
Contributor Author

mitom commented Oct 14, 2013

I added a check to eliminate the possibility of setting it up wrong - one less thing people can duck up :P

What do you think? Is this merge-ready? Is there something left that I missed?

Well, I couldn't break it further, so if you couldn't either, and you agree with my conclusions and the theories the changes and limitations are based on, then I'd say it is ready.

Others also didn't report any problems with the proposed changes, so I think they either don't care or agree. I hope that if there is anything wrong with it, it will turn up while it's in a WIP branch.

So, if you agree, I'd say it can be merged.

@mitom
Contributor Author

mitom commented Oct 14, 2013

I think this is a regression. The chunked upload setting applies to all mappings. IMO it should be possible to use another mapping with a local filesystem. For example: an application should be able to store user avatars on the local filesystem, while bigger files must be chunked and put on a Gaufrette layer.

Correct, just a sec.

@sheeep
Contributor

sheeep commented Oct 14, 2013

So, if you agree, I'd say it can be merged.

Very well. Let the adventure begin. (Will have to manually merge.)

@mitom
Contributor Author

mitom commented Oct 14, 2013

(Will have to manually merge.)

Sorry ^^

@sheeep sheeep merged commit 147388f into 1up-lab:release-1.0 Oct 14, 2013
@mitom mitom deleted the release-1.0 branch October 14, 2013 19:19
@sheeep
Contributor

sheeep commented Oct 14, 2013

That went easy! Yay, we made it! 🎉
Thank you very much for your contributions. It's been an honor!
