This blog has a number of entries regarding the viability of the Raspberry Pi as a general purpose Linux server for a small network. Our main driver for exploring the Pi as such an option is the fact that we are off-grid, and must therefore be careful about power consumption, especially for a device that will be left on 24 hours a day. We have now run a Pi as a server for our own network services for nearly 3 years. You can read about the migration and what it did for us here.
Since those days, the Raspberry Pi Foundation has continued to churn out little tweaks that steadily improve the performance of the Pi, and the latest version, the Raspberry Pi 3 Model B+, has a number of tweaks that make life a little easier. Among these are a slightly faster processor and, shared with the slightly slower Model 3B, the ability to do away with the microSD card when booting the device. I commented in a previous article on the need to be a somewhat open-minded system administrator and not dismiss the idea of relying on USB for the disk; avoiding it might be considered best practice, but if you think about it, all disks are plugged in somewhere along the line, so the additional risk is not necessarily a huge one.
There are now options for downloading disk images of Nextcloud running on an optimised setup for the Pi. These may suit many people. However, one is then dependent on the future of those projects, and perhaps some things you would like to do differently are unavailable or too difficult. In the ethos of Linux, the Raspberry Pi and Nextcloud, I prefer to be in control of my own decisions when installing such services.
Another recent improvement, which transforms the performance of Nextcloud on the Pi, is PHP version 7. But there are still absolute limits, such as the 977MB of available memory, that need to be kept in mind. So what optimisations can be done in providing a LAMP (Linux, Apache, MySQL/MariaDB, PHP) stack to make it reliable and usable?
I know enough about Debian, and its Raspberry Pi cousin Raspbian, to install or strip a system down to a bare minimum and ensure there are no extraneous services I do not want on the server. The deborphan package is helpful for this. As you remove things that are not needed on a server, like the triggerhappy daemon, left-over -dev packages and so on, you can run "deborphan --guess-all" to get an idea of dangling packages. Using this, you can get a basic, but still standard, installation of Raspbian Stretch, the current version, down to around 30MB of memory use or less.
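As a sketch, the pruning loop looks something like this; triggerhappy is just one example, and deborphan only guesses, so review its suggestions before purging anything:

```
$ sudo apt-get purge triggerhappy
$ deborphan --guess-all
$ sudo apt-get purge $(deborphan --guess-all)
$ deborphan --guess-all
```

Repeat until deborphan comes back empty.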
I had looked at the possibility of using Lighttpd or Nginx as the web server, but when running a PHP application like Nextcloud, the choice of web server has relatively little to do with performance. In addition, Apache's support for per-directory security and other local settings through the use of .htaccess files does make life easier. I use the mpm_event option, and lower the number of available Apache processes to a figure suitable for supporting a low number of users.
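To illustrate, the figures below are the sort of ballpark I mean for a handful of users, not recommended values; on Raspbian they would go in something like /etc/apache2/mods-available/mpm_event.conf:

```apache
# Illustrative mpm_event sizing for a small household or workgroup.
StartServers              1
MinSpareThreads           5
MaxSpareThreads          10
ThreadsPerChild          10
MaxRequestWorkers        20
MaxConnectionsPerChild 1000
```

MaxRequestWorkers is the real ceiling on concurrent requests, so it is the main knob for keeping Apache's memory footprint bounded on the Pi.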
MySQL or MariaDB is more of a challenge. The old MySQL that came with Raspbian Jessie could be stripped down to use a reasonable amount of memory, but MariaDB just needs more. MySQL tends to assume that the system it is running on is dedicated to that purpose, and all the tuning sites and pages ask for the total memory the system has. However, as a long-time techie, I have to admit that, when I have the option, I prefer using something other than MySQL. And before Nextcloud came along, its parent, Owncloud, used to throw horrible wobblies with MySQL whenever the system was upgraded. I discovered Postgresql as a fully supported alternative (though Nextcloud recommends MySQL or MariaDB), and many of the update problems disappeared. I also had the feeling, which is hard to prove, that Postgresql performed slightly better. But, of greater value on a Pi, the memory use of Postgresql is much more manageable than that of MySQL, even with drastic memory optimisations. What is more, its memory use does not seem to fluctuate as much as MariaDB's, again making running on a Pi easier.
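For the curious, the kind of low-memory postgresql.conf settings I have in mind look like this; the numbers are illustrative assumptions for a 1GB Pi, not Postgresql or Nextcloud recommendations:

```
# Keep Postgresql modest on a 977MB machine it shares with Apache and PHP.
shared_buffers = 64MB         # Postgresql's main cache
work_mem = 4MB                # per sort/hash operation, so keep it small
maintenance_work_mem = 32MB   # VACUUM and index builds
effective_cache_size = 256MB  # a planner hint, not an allocation
max_connections = 20          # a few users need far fewer than the default
```

The pleasant difference from MySQL tuning is that these few lines are largely the whole story.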
There is one additional optimisation that is difficult to prove, but is the result of years of experience. I wrote a while back about Linux filesystems, noting in particular that the venerable JFS is the only filesystem with which I have never lost data. In addition, an out-of-date benchmark comparing filesystems noted that not only did JFS get the most space from a given partition or disk, but that it was gentlest on the processor. These are important little wheezes when running a Pi. Unfortunately, Raspbian and the Pi's firmware do not support JFS out of the box. It is necessary to install an initrd file, as though the system were an x86 system. This used to be a real pain because the FAT filesystem where the boot takes place does not support file links, so as the initrd changes name when the kernel changes, a lot of manual work had to follow. Fortunately, someone has resolved the problem. This link has a file which, when dropped into /etc/kernel/postinst.d/, creates a new initramfs.gz file automatically. This makes running JFS for data security entirely possible and reliable, BUT there is an additional advantage. A number of benchmarks over the years have shown that the Postgresql database really enjoys the JFS filesystem, so it is a way of getting a little more performance from Postgresql as well.
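For illustration only, the essence of such a hook is a small script that rebuilds the initramfs for the kernel version just installed; this is a sketch of the idea, not the linked file, which does the job properly:

```shell
#!/bin/sh
# Hypothetical sketch of a /etc/kernel/postinst.d/ hook.
# Debian kernel hooks are called with the new kernel version as $1.
version="$1"
[ -n "$version" ] || exit 0
# Build a fresh initramfs for that kernel so the JFS module is
# available at boot; on the Pi the image then needs referencing
# with an "initramfs" line in /boot/config.txt.
update-initramfs -c -k "$version"
```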
Nextcloud calls for a local PHP cache, and one option is Redis. As most techies will know, Redis has been altering its licence terms as a way of getting more money from large-scale users of the product. History shows that when companies that had until that point understood what free and open source software was about make such changes, the writing is on the wall, and it is only a matter of time until that company or software falls into obscurity. Anyway, on a small system, it is doubtful that Redis makes that much difference in comparison to simpler options, so PHP is enabled with opcache and APCu. Nextcloud helpfully suggests settings for these.
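From memory, the suggested settings look something like the following in php.ini; do check the current Nextcloud admin manual rather than trusting my recollection:

```ini
; Opcache settings along the lines Nextcloud suggests.
opcache.enable = 1
opcache.interned_strings_buffer = 8
opcache.max_accelerated_files = 10000
opcache.memory_consumption = 128
opcache.save_comments = 1
opcache.revalidate_freq = 1
```

APCu is then selected as the local cache in Nextcloud's own config.php with a line like 'memcache.local' => '\OC\Memcache\APCu'.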
I run php-fpm rather than the Apache PHP module. This has long been known to provide better performance, but it can also be set to spin up processes on demand rather than permanently allocating chunks of resource to PHP.
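The on-demand behaviour is a few lines in the FPM pool configuration (pool.d/www.conf on Raspbian); the worker counts here are illustrative assumptions for a handful of users:

```ini
; Spin PHP workers up only when requests arrive, and reap them when idle.
pm = ondemand
pm.max_children = 5           ; hard cap on simultaneous PHP workers
pm.process_idle_timeout = 60s ; idle workers exit, freeing their memory
pm.max_requests = 500         ; recycle workers to contain any leaks
```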
The result is a system which, with a few users, after a week of activity, database backups and so on, uses around 100MB of the 977MB available. The only time memory use is a concern is when someone drops a very large image into the gallery app and ImageMagick has to crunch for a while to resize it.
There is, however, one aspect that can make the system look laggy, and that is when logging in when the brute-force settings plugin is installed. Whatever that useful security option does to prevent bad actors from trying to access your data, it crunches away for a few seconds before logging the user in. This is only from the web interface; the sync clients on desktops or phones/tablets are unaffected and work fast. After logging in, moving from application to application, while not instantaneous or even lightning fast, is perfectly acceptable and not likely to slow one down.
Apart from our own instance of Nextcloud, I have installed a system based on the above principles elsewhere, and between the two systems, around 3TB of data is accessible. I completed the second installation in Edinburgh over the weekend, and the sense of achievement when the security check returned a green A+, combined with the users commenting on the additional speed and usability, was pretty gratifying.
I think this experience is enough to conclude that, while a Raspberry Pi is of course slower than a multi-core Xeon server with shedloads of memory, it is perfectly adequate for small workgroups, provided one follows a path slightly less travelled than the defaults. None of the above optimisations is hard or whacky.
I hope these ideas are helpful.