
Django, django_tables2 and Bootstrap Table

I was always intrigued by Django. After all, its slogan is “The web framework for perfectionists with deadlines”. Last year I started a project for a client who needed a web app to manage a digital printing workflow. I evaluated Django and did their tutorial (which is really well made, by the way). Since the project also required lots of data processing of different data sources (CSV, XML, etc.), Python made a lot of sense. So in the end the choice was to use Django.

I needed to create several tables showing data from the Django models. In this post I explain how I combined django_tables2 (for the table definitions) and Bootstrap Table (for visualizing the tables and client-side table features).

Continue reading

Set Up Debian

Here are the steps I use to set up and configure a fresh install of Debian on a server.

  1. Log in as root: ssh root@<ip or domain.tld>
  2. Change the root password: passwd
  3. Update the system: apt update && apt upgrade
  4. Configure timezone: dpkg-reconfigure tzdata
  5. Configure locales: dpkg-reconfigure locales
  6. Install your favourite text editor (here nano) and make it the default:
    1. apt install nano
    2. update-alternatives --config editor

Now, create a user for yourself to use day-to-day and give this user the rights to run commands that require root privileges:

  1. Create new user: adduser <username>
  2. If sudo is not installed: apt install sudo
  3. Add your user to the sudo group: usermod -aG sudo <username>
  4. Now, try to log in from a second terminal using that user
  5. Optional (but strongly recommended): Add your public key to log in without a password: ssh-copy-id <username>@<server IP/domain>
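
If ssh-copy-id is not available on your machine, appending the key manually works too. A rough equivalent, assuming an ed25519 key (adjust the file name to whatever key you use):

ssh-keygen -t ed25519    # only needed if you have no key pair yet
cat ~/.ssh/id_ed25519.pub | ssh <username>@<server IP/domain> "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"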

Now that you have your own user, let’s harden the SSH daemon by changing the port and restricting root access from the outside.

  1. Edit the SSHD config: nano /etc/ssh/sshd_config
  2. Change Port to something other than the default 22
  3. Change PermitRootLogin to no
  4. If you want to disable logins by password and only allow key-based authentication, change PasswordAuthentication to no
  5. Restart SSHD: sudo systemctl restart ssh
  6. Important: Try to log in from another terminal first to ensure it is working as intended (use ssh -p <newPort> if you changed the port)
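
For reference, after these changes the relevant lines in /etc/ssh/sshd_config look something like this (2222 is just an example, pick your own port):

Port 2222
PermitRootLogin no
PasswordAuthentication no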

Now, install a firewall (here ufw) to only open the ports that you really need:

  1. Install ufw: apt install ufw
  2. Create rule to allow SSH port: ufw allow <sshPort>/tcp (if you use the default port you can also use ufw allow OpenSSH)
  3. You can also rate limit a port (6 or more connections within 30 seconds): ufw limit <port>/tcp
  4. Ensure that your rule is correct, otherwise you will lock yourself out in the next step
  5. Enable ufw: ufw enable
  6. Try to log in from another terminal to verify it is still working.
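
Putting it together, a minimal session might look like this (again with 2222 standing in for your SSH port):

ufw allow 2222/tcp
ufw enable
ufw status verbose

Note that ufw limit <port>/tcp both allows and rate limits the port, so you can use it instead of the plain allow rule.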

That’s pretty much it. You might also want to set up msmtp so that you receive email from your system (cron jobs etc.); a minimal config sketch follows the package list below. There are also the following packages I find useful and install:

  • htop: Lets you interactively monitor system resources and processes.
  • icdiff: A nice tool providing side-by-side comparison with color highlighting.
  • dnsutils: Essential for diagnosing/testing network stuff. For example, it provides dig.
  • ntp: Time synchronization.
  • curl
  • ncdu: Nice tool to find big files.
  • tree: A nice tool to show directories in a tree-like format.
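
As for the msmtp mentioned above, a minimal /etc/msmtprc for a typical provider might look like the following sketch (host, addresses and password handling are placeholders you need to adapt):

defaults
auth on
tls on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
logfile /var/log/msmtp.log

account default
host smtp.example.com
port 587
from server@example.com
user server@example.com
password <yourPassword>

aliases /etc/aliases

With the aliases file in place, mail to root (from cron, for example) can be redirected to your own address.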

Notes on traefik v2, Nextcloud, etc.

Now that the Raspberry Pi is set up and Docker is ready to be used, Gitea is running nicely. However, it is only accessible via the IP address and port, and without TLS. So before setting up Nextcloud, I wanted to get a reverse proxy ready that also takes care of TLS termination. I use traefik (v2.2), which supports and integrates with Docker. Here I document how I configured it to put all my services (including Pi-Hole and my router’s web interface) behind the reverse proxy with TLS.

At the end, I’ll briefly note how Nextcloud is set up.

Continue reading

Migrating Data to Nextcloud

If you need to migrate your data to Nextcloud, you probably don’t want to upload all your files through the web interface. I suggest first trying it with a small amount of data (for example, one folder) to verify that it works. Since Nextcloud runs in Docker in my case, some of the commands are specific to that setup; if yours does not, you can simply run the underlying command directly.

As previously noted, my data was “in the cloud”, so not already on an external drive (besides my backup, of course). Although it depends on the size of your data, I therefore suggest copying the data onto an external drive instead of copying it over the network from your machine to your server. Unless, of course, the server is not physically accessible to you (unlike a home server). Even then, you can use the same procedure I used.

For the external drive, I formatted it as exFAT so that my Mac can write to it. On the Raspberry Pi, I had to install exfat-fuse. I tried FAT32, as it is supported out of the box by both, but it has a limitation on the maximum file size. I considered using ext4, but write support on the Mac supposedly is not stable.
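
For example, mounting the drive on the Pi came down to something like this (the device name is an example, check lsblk for yours):

sudo apt install exfat-fuse
sudo mkdir -p /mnt/usb
sudo mount -t exfat /dev/sda1 /mnt/usb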

If you are copying the data directly to the Nextcloud destination, skip this step. First, copy the data to the external drive: rsync -rvia --exclude ".DS_Store" <src> <dest>. We are using rsync to synchronize the files. The important flag is -a, which preserves attributes such as when the files were last modified. Once it is done, plug the drive into your server and mount it.

Now copy the data to its final destination. If you are using a Docker bind mount, you can copy the files there directly. If you are using a named volume, I believe you need to go through a temporary container to be able to copy files there (see the sketch below). When Nextcloud adds files and directories, it sets certain permissions (0755 for directories, 0644 for files). Fortunately, compared to the rsync command above, we can add another argument that sets these right away while copying the files:

rsync -rvia --chmod=D755,F644 --exclude ".DS_Store" <src> <dest>

I left the --exclude here in case you skipped my first step. Also note that there is a difference between specifying a trailing slash on the source directory and not: with a trailing slash, rsync copies the contents of the directory; without it, it copies the directory itself.
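
For the named volume case mentioned above, a throwaway container can bridge the gap. A sketch, assuming the volume is named nextcloud_data and the drive is mounted at /mnt/usb (both placeholders):

docker run --rm -v /mnt/usb:/src -v nextcloud_data:/dest alpine cp -r /src/. /dest/data/<user>/files/

The chown and files:scan steps below then apply unchanged.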

Now, hand ownership of the data to www-data (or whichever user your web server runs as): docker exec nextcloud-app chown -R www-data:www-data /var/www/html/data/<user>/files

Nextcloud, however, does not really “know” about these files yet. Specifically, there is nothing in the database regarding these files. To change that, make Nextcloud scan for the files and add them: docker exec -u www-data nextcloud-app php /var/www/html/console.php files:scan --all

Depending on the size of your data this can take a few minutes (in my case about 3:30 minutes for almost 20000 files). At the end you should see output like the following, and the files should show up in Nextcloud.

Starting scan for user 1 out of 2 (user1)
Starting scan for user 2 out of 2 (user2)
+---------+-------+--------------+
| Folders | Files | Elapsed time |
+---------+-------+--------------+
| 460     | 16589 | 00:03:27     |
+---------+-------+--------------+

Original Source: Tutorial: How to migrate mass data to a new NextCloud server

Notes on Docker

I’ve never really followed the hype around Docker but, to be honest, I had also never really taken the time to look into it. That was until my friend Harald told me that he uses it on his Raspberry Pi to run some services. What sounded appealing is that you can reproduce builds, you are not “polluting” the host system, you can keep all configs etc. in one place, and you can move your services somewhere else quickly. The latter is especially interesting when you want to reinstall the host system. Furthermore, you can put the build as well as the configuration in version control. Of course, you are adding another layer of complexity into the mix. I thought I’d give it a try. Here are some notes pertinent to the setup on my Raspberry Pi.

Continue reading
