If you need to migrate your data to Nextcloud, you probably don’t want to upload all your files through the web interface. I suggest first trying the procedure with a small amount of data (for example, one folder) to verify that it works. Since Nextcloud runs with Docker in my case, some of the commands are specific to that; if you don’t use Docker, you can simply run the underlying command that is executed.
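Since the copy steps below use rsync, a low-risk way to run such a trial is rsync’s --dry-run flag, which only reports what would be transferred without writing anything. A minimal sketch, reusing the placeholder paths from below:
# Preview the transfer; nothing is written (paths are placeholders)
rsync -rvia --dry-run --exclude ".DS_Store" <src> <dest>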
As previously noted, my data was “in the cloud”, so not already on an external drive (apart from my backup, of course). In general, although it depends on the size of your data, I suggest copying the data onto an external drive instead of over the network from your machine to your server. The exception is, of course, a server that is not physically accessible to you (unlike a home server, which is). Even then, you can use the same procedure I used, just over the network.
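For the network route, the same rsync invocation shown below also works over SSH, pushing directly to the server. A minimal sketch, assuming SSH access; the user, host, and paths are placeholders:
# Copy over the network instead of via an external drive
rsync -rvia --exclude ".DS_Store" <src> user@server:<dest>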
For the external drive, I formatted it as exFAT so that my Mac can write to it. On the Raspberry Pi, I had to install exfat-fuse. I tried FAT32, as it is supported out-of-the-box by both, but it has a limitation on maximum file size (4 GB). I considered using ext4, but write support on the Mac supposedly is not stable.
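On Raspberry Pi OS (Debian-based), this is a single package install; a sketch, assuming the packages from the Debian repositories (on newer releases, the in-kernel exFAT driver and the exfatprogs package replace these):
# Install exFAT support (FUSE driver plus formatting/checking tools)
sudo apt install exfat-fuse exfat-utils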
If you are copying directly to the Nextcloud destination, skip this step. First, copy the data to the external drive:
rsync -rvia --exclude ".DS_Store" <src> <dest>
We are using rsync to synchronize the files. The important argument is a (archive mode), which preserves attributes such as when the files were last modified. Once it is done, plug the drive into your server and mount it.
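Mounting on the server means identifying the device and attaching it to a directory; a sketch with a hypothetical device name and mount point:
# Find the drive's device name (e.g. sda1) in the lsblk listing
lsblk
# Create a mount point and mount the exFAT drive there
sudo mkdir -p /mnt/external
sudo mount -t exfat /dev/sda1 /mnt/external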
Now copy the data to its final destination. If you are using a Docker bind mount, you can copy the files there directly. If you are using a named volume, I believe you need to go through a temporary container to be able to copy files there (see the sketch below). When Nextcloud adds files and directories, it sets certain permissions (0755 for directories, 0644 for files). Fortunately, we can add another argument to the rsync command above that sets these right away while copying:
rsync -rvia --chmod=D755,F644 --exclude ".DS_Store" <src> <dest>
I left the --exclude here in case you skipped my first step. Also note that there is a difference between specifying a trailing slash on the source directory or not: with a trailing slash, rsync copies the directory’s contents into the destination; without it, it copies the directory itself.
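For the named-volume case, the temporary container just needs the volume and the mounted drive bound in at the same time. A minimal sketch, assuming a volume named nextcloud_data that holds /var/www/html and the mount point from above; adjust the names to your setup:
# Throwaway container that sees both the drive (read-only) and the volume
docker run --rm \
  -v /mnt/external:/source:ro \
  -v nextcloud_data:/target \
  alpine sh -c 'apk add --no-cache rsync && rsync -rvia --chmod=D755,F644 /source/ /target/data/<user>/files/'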
Now, hand ownership of the data to www-data (or whichever user your web server runs as):
docker exec nextcloud-app chown -R www-data:www-data /var/www/html/data/<user>/files
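To verify that this worked, you can spot-check owner and permissions from inside the container (stat with a format string is standard on the Debian-based image):
# Should report www-data:www-data and 755/644 modes
docker exec nextcloud-app stat -c '%U:%G %a %n' /var/www/html/data/<user>/files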
Nextcloud, however, does not really “know” about these files yet; specifically, there is nothing in the database regarding them. To change that, make Nextcloud scan for files and add them:
docker exec -u www-data nextcloud-app php /var/www/html/console.php files:scan --all
Depending on the size of your data, this can run for a few minutes (in my case ~3:30 minutes for almost 20,000 files). At the end you should see output like the following, and the files should show up in Nextcloud.
Starting scan for user 1 out of 2 (user1)
Starting scan for user 2 out of 2 (user2)
+---------+-------+--------------+
| Folders | Files | Elapsed time |
+---------+-------+--------------+
| 460     | 16589 | 00:03:27     |
+---------+-------+--------------+
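As an aside, if you only migrated files for a single user, you can pass that user id to files:scan instead of --all (files:scan accepts a user id as a positional argument):
# Scan a single account rather than every user
docker exec -u www-data nextcloud-app php /var/www/html/console.php files:scan <user>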
Original Source: Tutorial: How to migrate mass data to a new NextCloud server