
Conversation

@andrueastman
Contributor

@andrueastman andrueastman commented Nov 11, 2019

Changes proposed in this pull request

This work is done in accordance with the spec found at the following link: https://github.com/microsoftgraph/msgraph-sdk-design/blob/master/tasks/FileUploadTask.md

Essentially, one may perform a large upload by following the steps below.

  1. Create an upload session.
// Create upload session
// POST /v1.0/drive/items/01KGPRHTV6Y2GOVW7725BZO354PWSELRRZ:/SWEBOKv3.pdf:/microsoft.graph.createUploadSession
var uploadSession = await graphClient.Drive.Items[itemId].ItemWithPath("SWEBOK.pdf").CreateUploadSession().Request().PostAsync();
  2. Create the task using the session.
// Create task
var maxChunkSize = 320 * 1024; // 320 KB - change this to your preferred slice size. 5 MB is the default.
var largeFileUpload = new LargeFileUpload(uploadSession, graphClient, stream, maxChunkSize);
  3. Create an upload monitor.
    public class MyProgress : IProgressCallback
    {
        public void OnFailure(ClientException clientException)
        {
            Console.WriteLine(clientException.Message);
        }

        public void OnSuccess(DriveItem result)
        {
            Console.WriteLine("Upload completed; item id below");
            Console.WriteLine(result.Id);
        }

        public void UpdateProgress(long current, long max)
        {
            long percentage = (current * 100) / max;
            Console.WriteLine("Upload in progress. " + current + " bytes of " + max + " bytes. " + percentage + " percent complete");
        }
    }
  4. Upload the file.
uploadedFile = await largeFileUpload.ResumeAsync(new MyProgress());

Alternatively, one may choose to perform the file upload in slices manually by replacing steps 3 and 4 with the following.

// Setup the chunk request necessities
var slicesRequests = largeFileUpload.GetUploadSlicesRequests();
var trackedExceptions = new List<Exception>();
DriveItem itemResult = null;

// Upload the chunks
foreach (var request in slicesRequests)
{
    // Send chunk request
    var result = await largeFileUpload.UploadSliceAsync(request, trackedExceptions);
    // Do your updates here: update progress bar, etc.
    Console.WriteLine($"File uploading in progress. {request.RangeEnd} of {stream.Length} bytes uploaded");

    if (result.UploadSucceeded)
    {
        itemResult = result.ItemResponse;
        Console.WriteLine($"File uploading complete");
    }
}

Notes

One can find a sample app built on this PR (using this repo as a git submodule) to showcase and test the above usage at the following link.
https://github.com/andrueastman/UploadFileSample

8ggmaker and others added 4 commits August 20, 2019 23:11
Avoid extra memory allocation when doing chunked upload for large file
- Wraps the chunkedUploadProvider in another LargeFileUpload
- Prevents breaking changes by keeping obsolete method
- Performance tests done
@andrueastman andrueastman self-assigned this Nov 11, 2019
@andrueastman andrueastman marked this pull request as ready for review November 11, 2019 09:30
Collaborator

@MIchaelMainer MIchaelMainer left a comment


I want to review a bit more.


if (result.UploadSucceeded)
{
progressCallback?.OnSuccess(result.ItemResponse);// update the caller with sucess
Collaborator


typo

/// <param name="request">The UploadChunkRequest to make the request with.</param>
/// <param name="exceptionTrackingList">A list of exceptions to use to track progress. ChunkedUpload may retry.</param>
/// <returns></returns>
public virtual async Task<UploadChunkResult> GetChunkRequestResponseAsync(UploadChunkRequest request, ICollection<Exception> exceptionTrackingList)
Collaborator


I think this may be better called UploadChunkAsync().

namespace Microsoft.Graph
{
/// <summary>
/// Interface for to enbale handling of progress/errors for a long upload process.
Collaborator


typo.

Additionally, we should make this generic enough to be used in other progress scenarios. This should be applicable to upload, download, or any other scenario where progress needs to be captured.

Contributor


/// Handler for scenario when process completes successfully
/// </summary>
/// <param name="result">The resulting file from the upload.</param>
void OnSuccess(DriveItem result);
Collaborator


Do we need to return a DriveItem? Could we make this some sort of result object? I'd like us to be able to reuse this interface for other progress scenarios.

@darrelmiller
Contributor

99% of this code should be in Core. We need to make this generic so that it is not tied to DriveItem. We will want to use this code for large attachment uploads also.
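A generic version of the callback along these lines would decouple the task from DriveItem; a minimal sketch, with illustrative names that are not the final API:

```csharp
/// <summary>
/// Hypothetical generic progress contract. TResult replaces the
/// hard-coded DriveItem so drive uploads, large attachment uploads,
/// or downloads could all report through the same interface from Core.
/// </summary>
public interface IProgressCallback<TResult>
{
    void OnFailure(ClientException clientException);
    void OnSuccess(TResult result);
    void UpdateProgress(long current, long max);
}
```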

@darrelmiller
Contributor

We cannot use the term Chunk anywhere in our interface or samples. This is not chunking.

/// <param name="readBuffer">The byte[] content to read from.</param>
/// <param name="exceptionTrackingList">A list of exceptions to use to track progress. ChunkedUpload may retry.</param>
/// <returns></returns>
[Obsolete("This overload is obsolete. Please use the overload that does not use the unnecessary readBuffer parameter")]
Member


It's worth informing our customers that the readBuffer value passed here will be ignored.

/// <param name="progressCallback"><see cref="IProgressCallback"/> object to monitor the progress of the upload.</param>
/// <returns>Item information returned by server.</returns>
public async Task<DriveItem> UploadAsync(int maxTries = 3, IEnumerable<Option> options = null)
public async Task<DriveItem> UploadAsync(int maxTries = 3, IEnumerable<Option> options = null , IProgressCallback progressCallback = null)
Member


As @darrelmiller noted, let's consider using .NET's IProgress interface here.
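For context, the BCL already ships `System.IProgress<T>` and its default implementation `Progress<T>`. A sketch of how the suggested overload might consume it (the overload itself is hypothetical, not part of this PR):

```csharp
// Progress<T> posts callbacks to the SynchronizationContext captured at
// construction, which is convenient for UI apps updating a progress bar.
IProgress<long> progress = new Progress<long>(bytesSent =>
    Console.WriteLine($"{bytesSent} bytes uploaded so far"));

// Hypothetical overload taking the framework interface instead of a custom one:
// DriveItem item = await provider.UploadAsync(maxTries: 3, progress: progress);
```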


await this.UpdateSessionStatusAsync().ConfigureAwait(false);
uploadTries += 1;
if (uploadTries < maxTries)
Member


This retry logic will result in 3^3 retries (by default) for 429, 503 and 504 HTTP responses when the service doesn't return a successful result. The underlying HTTP client already has a RetryHandler that handles retries for 429, 503 and 504 responses.
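One way to see the compounding, assuming the upload loop's default maxTries of 3 and the RetryHandler's default of 3 retries per HTTP request (the exact counts here are illustrative, not measured against the PR):

```csharp
// Illustrative only: nested retry layers multiply the number of
// HTTP attempts a persistently-throttled slice can generate.
int uploadLoopTries = 3;              // maxTries in the upload loop above
int attemptsPerHttpCall = 1 + 3;      // initial call + RetryHandler retries
int worstCaseAttempts = uploadLoopTries * attemptsPerHttpCall;
Console.WriteLine(worstCaseAttempts); // HTTP attempts for a single slice
```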

@Misiu

Misiu commented Nov 18, 2019

99% of this code should be in Core. We need to make this generic so that it is not tied to DriveItem. We will want to use this code for large attachment uploads also.

I'm waiting for large file upload support while sending emails. It was announced some time ago (https://docs.microsoft.com/en-us/graph/outlook-large-attachments?tabs=http), but it's still unsupported here and in the beta lib.
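For reference, the flow described in those docs starts an upload session directly on the message's attachments (a beta endpoint at the time; the body values below are illustrative, not from this PR):

```http
POST https://graph.microsoft.com/beta/me/messages/{id}/attachments/createUploadSession
Content-Type: application/json

{
  "AttachmentItem": {
    "attachmentType": "file",
    "name": "report.pdf",
    "size": 1048576
  }
}
```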

@andrueastman
Contributor Author

Thanks for all the feedback on this PR.
I am going to close this PR, limit the service library changes to fixing the memory issue in a separate PR, and move the generic UploadTask to Core shortly so that it can support both the v1.0 and beta libs.

@andrueastman andrueastman deleted the andrueastman/FileUploadTask branch December 3, 2019 05:41

7 participants