We have a requirement to upload 100 MB documents into SharePoint. I found an article where they upload in chunks with the StartUpload, ContinueUpload, and FinishUpload methods. I tried doing so, but it takes 60 minutes to upload the document. Is this the right code?

public Map<String, String> uploadFile(String folder, String fileName,
            byte[] binary, boolean overwrite, boolean toaddInsideFolder, String localFileURL)
            throws LoginException, IOException, URISyntaxException {
        String url = URIUtil.encodePath("/Files/Add(url='" + fileName + "', overwrite=" + overwrite + ")");
        if(toaddInsideFolder)
            url = restServiceURL.toString() + "/GetFolderByServerRelativeUrl('" + folder + "')"+url;
        else
            url = restServiceURL.toString() + "/Folders/GetByUrl('" + folder + "')"+url;
        HttpPost httppost = new HttpPost(url);
        // Request a JSON (OData verbose) response
        httppost.addHeader("Accept", "application/json; odata=verbose");
        httppost.addHeader("Cookie", getSecurityToken().getAuthCookiesToken());
        HttpResponse response = httpclient.execute(httppost);
        String statusLine = response.getStatusLine().toString();
        String jsonresponse = EntityUtils.toString(response.getEntity());
        JSONObject json = new JSONObject(jsonresponse);
        String gUid = json.getJSONObject("d").getString("UniqueId");
        String endpointUrlS = restServiceURL.toString() + "/GetFileByServerRelativeUrl('" + folder + "/" + fileName + "')/savebinarystream";
        HttpPost httppos = new HttpPost(endpointUrlS);
        // Request a JSON (OData verbose) response
        httppos.addHeader("Accept", "application/json; odata=verbose");
        httppos.addHeader("Cookie", getSecurityToken().getAuthCookiesToken());
        HttpResponse response1 = httpclient.execute(httppos);           
        File file = new File(localFileURL);
        int fileSize = (int) file.length();
        final int chunkSize = 50 * 200 * 200; // 2,000,000 bytes, roughly 2 MB per chunk
        byte[] buffer = new byte[Math.min(fileSize, chunkSize)];
        long count = 0;
        if (fileSize % chunkSize == 0)
            count = fileSize / chunkSize;
        else
            count = (fileSize / chunkSize) + 1;
        // try-with-resources to ensure closing stream
        try (FileInputStream fis = new FileInputStream(file);
                BufferedInputStream bis = new BufferedInputStream(fis)) {
            int bytesAmount = 0;
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            int i = 0;
            String startUploadUrl = "";
            while ((bytesAmount = bis.read(buffer)) > 0) {
                baos.write(buffer, 0, bytesAmount);
                byte[] partialData = baos.toByteArray();
                if (i == 0) {
                    startUploadUrl = restServiceURL.toString() + "/GetFileByServerRelativeUrl('"+ folder +"/"+fileName+"')/StartUpload(uploadId=guid'"+gUid+"')";                       
                    executeMultiPartRequest(startUploadUrl, partialData);
                    System.out.println("first worked");
                    // StartUpload call
                } else if (i == count-1) {
                    String finishUploadUrl = restServiceURL.toString() + "/GetFileByServerRelativeUrl('"+ folder +"/"+fileName+"')/FinishUpload(uploadId=guid'"+gUid+"',fileOffset="+i+")";
                    executeMultiPartRequest(finishUploadUrl, partialData);
                    System.out.println("FinishUpload worked");
                    // FinishUpload call
                } else {
                    String continueUploadUrl = restServiceURL.toString() + "/GetFileByServerRelativeUrl('"+ folder +"/"+fileName+"')/ContinueUpload(uploadId=guid'"+gUid+"',fileOffset="+i+")";
                    executeMultiPartRequest(continueUploadUrl, partialData);
                    System.out.println("continue worked");
                }
                i++;
            }
        }           
        Map<String, String> result = new HashMap<String, String>();
        return result;
    }
    public void executeMultiPartRequest(String urlString, byte[] fileByteArray) throws IOException, LoginException, URISyntaxException {
        HttpPost httppost = new HttpPost(urlString);
        httppost.addHeader("Accept", "application/json; odata=verbose");httppost.addHeader("Cookie", getSecurityToken().getAuthCookiesToken());
            HttpEntity entity = new ByteArrayEntity(fileByteArray);
            httppost.setEntity(entity);
        HttpResponse response = httpclient.execute(httppost);
    }
  • At a glance, this code looks appropriate. Can you add timing logs to this code to gain more insight? (It might help to test with smaller files, still big enough for a multipart upload, that are still unexpectedly slow; a timing sketch follows these comments.) Can you share timing information? Is the chunk size appropriate? It looks like 2 MB, and I'm not familiar with SharePoint's limits. Commented Jul 1, 2021 at 9:49
  • @mcint Even if I set the chunk size to 10 MB, the time to upload the entire 60 MB file is still 1 hour. The code hangs for some time at httpclient.execute() in executeMultiPartRequest(). Commented Jul 1, 2021 at 12:07
  • I’m surprised by the slow upload speed on each connection, which is worth following up on. However, it looks like you could get some more speed by having each chunk upload in parallel. httpclient.execute(…) is synchronous, right? So the chunk uploads will happen sequentially, not in parallel. Commented Jul 1, 2021 at 22:16
  • Can you confirm a different client can upload faster? Commented Jul 1, 2021 at 22:26
  • It could be a network limitation on your network, in transit, or coincident with high load on their end. See sharepoint.stackexchange.com/questions/193962 or jrjlee.com/2011/07/very-slow-upload-speeds-to-sharepoint.html. See if you can double-check the limits some other way; another multipart upload app would provide a good comparison and sanity check. Minio might have Azure libraries. Commented Jul 1, 2021 at 22:34
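
Following up on the timing-logs suggestion: a minimal sketch of executeMultiPartRequest with timing added, reusing the question's httpclient and getSecurityToken() members. One caveat worth checking alongside the timings: the original method never consumes the response entity, and with a pooling connection manager unreleased connections can make later execute() calls block, which may be related to the observed hang.

public void executeMultiPartRequest(String urlString, byte[] fileByteArray)
        throws IOException, LoginException, URISyntaxException {
    HttpPost httppost = new HttpPost(urlString);
    httppost.addHeader("Accept", "application/json; odata=verbose");
    httppost.addHeader("Cookie", getSecurityToken().getAuthCookiesToken());
    httppost.setEntity(new ByteArrayEntity(fileByteArray));

    // Time each chunk upload individually
    long start = System.nanoTime();
    HttpResponse response = httpclient.execute(httppost);
    long elapsedMs = (System.nanoTime() - start) / 1_000_000;
    System.out.println("POST " + fileByteArray.length + " bytes -> "
            + response.getStatusLine() + " in " + elapsedMs + " ms");

    // Consume the body so the pooled connection is released for the next chunk
    EntityUtils.consume(response.getEntity());
}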

1 Answer


I think that baos.write(buffer, 0, bytesAmount) appends every read buffer to baos on each loop iteration, so even with a 10 MB chunk size the request in the last iteration would carry the whole 100 MB.

I used this instead:

int bytesAmount = 0;
byte[] buffer = new byte[chunkSize];

while ((bytesAmount = bis.read(buffer)) > 0) {
    // Copy only the bytes read in this iteration instead of
    // accumulating everything in the ByteArrayOutputStream
    byte[] chunk = new byte[bytesAmount];
    System.arraycopy(buffer, 0, chunk, 0, bytesAmount);
    // ... pass 'chunk' (not the accumulated baos contents) to the upload call
}
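
For context, a minimal sketch of how the corrected loop might look end to end, reusing the question's bis, count, gUid, chunkSize, and executeMultiPartRequest. Note that it also passes a running byte offset as fileOffset; as far as I can tell SharePoint expects the byte position within the file there rather than the chunk index, which is worth verifying against the REST documentation.

// Sketch: send only the bytes read in each iteration, tracking the byte offset
int bytesAmount;
int i = 0;
long offset = 0; // byte offset into the file for ContinueUpload/FinishUpload
byte[] buffer = new byte[chunkSize];
String base = restServiceURL.toString()
        + "/GetFileByServerRelativeUrl('" + folder + "/" + fileName + "')";
while ((bytesAmount = bis.read(buffer)) > 0) {
    byte[] chunk = new byte[bytesAmount];
    System.arraycopy(buffer, 0, chunk, 0, bytesAmount);
    if (i == 0) {
        executeMultiPartRequest(base + "/StartUpload(uploadId=guid'" + gUid + "')", chunk);
    } else if (i == count - 1) {
        executeMultiPartRequest(base + "/FinishUpload(uploadId=guid'" + gUid
                + "',fileOffset=" + offset + ")", chunk);
    } else {
        executeMultiPartRequest(base + "/ContinueUpload(uploadId=guid'" + gUid
                + "',fileOffset=" + offset + ")", chunk);
    }
    offset += bytesAmount;
    i++;
}

With this change each request carries exactly one chunk, so the total bytes sent equal the file size. The accumulating version sends chunk 1 in every iteration, chunk 2 in all but the first, and so on, roughly n(n+1)/2 chunks for n iterations, which would explain an hour-long upload of a 100 MB file.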

