Pure-FTPd and DirectAdmin

A few days ago I set out to do a 5-minute task that turned into hours of head scratching and some learning. This is the story of how I spent those hours trying to set up an FTP account.

The task was really simple: set up a new FTP account on my server so that a client could upload files to be processed by the system we are building. I thought I’d be done in 5 minutes: just log in to the DirectAdmin (DA) panel, create a new account, and share the credentials. However, as so often happens, things did not go as planned. After setting up the new account, I was unable to log in.

In the past I had noticed that the system monitor in DA showed the ProFTPD service as stopped, but since I don’t really use FTP, and the existing FTP accounts worked fine, I didn’t think much of it. Now it was time to pay a little more attention.

I went ahead and tried to restart the proftpd service:

service proftpd restart

To my surprise, the command came back saying that the service was not recognized. What was running the FTP server, then? Time to find out:

service --status-all | grep "ftp"

I quickly learned that my server was in fact running Pure-FTPd, so I searched for its documentation. There I learned about virtual users, my first nice piece of knowledge from this adventure, especially because I had long wondered how FTP users worked on this server: there were no system accounts matching the FTP users. Back when I ran a local FTP server for development, I had to create a system account for every FTP user I wanted to add, but not on this server.

Watching the log files I learned two things: 1) Mac’s Finder strips anything after, and including, the @ symbol in the username, which was the case with my username; and 2) once I switched to an FTP client without that behavior, the server would still not let me in, saying only that the authentication had failed. This got me running in circles, changing the password and typing it as carefully as I could just to make sure it was right. No luck.

Reading some Stack Overflow answers, I started experimenting with the pure-pw command, and it kept complaining that the /etc/pureftpd.passwd file was not found. There was, however, a /etc/proftpd.passwd file. I decided to re-create the Pure-FTPd database using that file:

pure-pw mkdb -f /etc/proftpd.passwd

After doing this I could finally log in via FTP. It took me about two hours to get to this point, and while I could have just called it a day, sent the credentials, and gone home, something kept bugging me. Why couldn’t DA properly manage my FTP users? Also, by this point I had noticed that although I had changed the password for my FTP account not long ago, after the first time I had issues with FTP, that new password had never actually been set. What was going on?


So, let me open a parenthesis to explain what happened before, and how this problem had been there for a while without me noticing. A couple of weeks ago I was looking for some files that I thought might be on my server (they weren’t). I wanted to FTP into the server to check, but I could not log in. Since it had been a long time since I had used the FTP account, I decided to just go into DA and reset the password, but no matter what I set it to, I could not log in. Finally, I contacted support and asked them to look into the problem. I changed the admin DA password to a temporary one and sent it to them. After some back and forth, they came back saying the problem was fixed. When I asked what had happened, they just said there had been inconsistencies with the login system. I went back into DA and changed the password again, but since I had already verified that the FTP account worked before changing the password, I didn’t bother to test it afterwards. Yesterday I realized that password change had never taken effect.


I now know that support simply set the password to what I told them it was, which is why the account worked after they looked into the issue. What they didn’t realize is that changing the password in DA had no actual effect on the account.

At this point it became clear that DA thought it was dealing with ProFTPD, as evidenced by the system monitor reporting ProFTPD as stopped. After reading documentation and experimenting, I finally set the correct FTP setting in DA.

I should mention that I spent quite some time digging into the custombuild script and learning a bit about its internals. But since I didn’t want to re-install the FTP service, I ended up using the directadmin binary to set the FTP configuration. You can also do this manually by editing the directadmin configuration file, just like you can manage your FTP accounts by manually editing the passwd files.

directadmin set pureftpd 1 restart

After doing this, things work correctly, except that DA insists on appending @domain.com to the FTP usernames. I was not able to get rid of that from the DA panel, but I can easily edit the passwd file, change the name there, and rebuild the db. I just need to remember not to modify the FTP accounts from DA, because that would bring the old username format back.

A couple of weeks ago I also had to set up an SFTP account, but that is a topic for another post. I mention it here because I want you to stay tuned for that article: using plain FTP is not recommended. Unfortunately, the client had issues implementing a service that used the SFTP account, and requested a simple, plain, old FTP account. The data we are dealing with is not sensitive at all; it is in fact information that we WANT to be available to the general public. We just want to present it in a nicer way, which is why we process it first. Still, I do not recommend plain FTP if you can avoid it. There are better options, such as FTPS and SFTP.

Python Issues on npm install

I was trying to run npm install in a container, but I kept getting a Gyp error. There was a long log, but the error that seemed to me to be the problem was:

Error: `make` failed with exit code: 2

I saw on Stack Overflow that this problem was likely because I was using an incompatible version of Node; I should use a lower one. At the time I was using the latest node image from Docker Hub, which was version 16 on Alpine. I decided to pull version 15 and try there.

After trying with version 15 I started getting another error:

Error: Can't find Python executable

Stack Overflow suggested installing Python 2, but at this point I remembered one of my teammates saying this project had to run on Node 14. I pulled the node:14-alpine image and ran npm install there. This time the installation went as expected, with no Gyp errors.

In case you are wondering, the project I’m working on uses ddev, but since ddev is built for Docker, and on Linux I use Podman, I can’t really run the project using ddev on my laptop like I do on my Mac desktop computer. I know Docker runs on Linux too, but I like Podman better. To keep things simple, I run the project in a pod that has one container for the db and another for the web server (PHP on Apache). To manage dependencies for the front end and the back end, I use separate containers for node and composer. I will write an article on how to do that. Stay tuned.

Promise Poisoning

JavaScript has this powerful concept of promises. A promise is the representation of the completion of an operation that is asynchronous in nature. Because JS is single-threaded, it operates with a queue: things enter the queue and are processed in the order in which they entered it. However, some operations may take a long time to complete, which would block everything behind them. To solve this problem, many operations can be performed asynchronously, but that introduces a new set of challenges. How do you know when the asynchronous operation completes? How can you get its result? How can you tell if it failed?

In the beginning we had callbacks. Callbacks were functions you passed to the asynchronous operation, to be called at the end of the operation, with the result passed as a parameter. This quickly became a problem: when many asynchronous operations were chained together, you’d end up with what is known as callback hell.
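As a sketch of what callback hell looks like, here are three hypothetical async helpers (fetchUser, fetchOrders, and sumTotals are made up for this example) chained through callbacks:

```javascript
// Hypothetical async helpers; each reports its result by calling
// the callback it was given with (error, result).
function fetchUser(id, callback) {
  callback(null, { id: id, name: 'Ada' });
}
function fetchOrders(user, callback) {
  callback(null, [{ total: 10 }, { total: 32 }]);
}
function sumTotals(orders, callback) {
  callback(null, orders.reduce((sum, order) => sum + order.total, 0));
}

// Each result only exists inside the previous callback, so every chained
// step adds another level of nesting (and another error check).
fetchUser(1, function (err, user) {
  if (err) return console.error(err);
  fetchOrders(user, function (err, orders) {
    if (err) return console.error(err);
    sumTotals(orders, function (err, total) {
      if (err) return console.error(err);
      console.log(total); // 42
    });
  });
});
```

With only three steps the pyramid is already forming; real code with branching and error handling gets much worse.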

Many clever people, myself included, quickly realized you could fake events to get out of callback hell, but that too resulted in a weird code style and usually required some boilerplate code.

To solve these issues, JavaScript implemented promises. In a way, they look like the event-based solutions, but they get rid of the boilerplate and are a native part of the language. However, I recently realized that once you introduce a promise into a function, it starts poisoning everything…

Take for example a system that needs to log into a secondary system. You have many options for storing your credentials. Say you decided to store them in environment variables loaded with the dotenv npm package. Your code would look something like:

function getCreds() {
  return {
    user: process.env.USER,
    pwd: process.env.PASSWORD,
  }
}

Pretty straightforward. Now, some time in the future, the system changes to use dynamically generated passwords that you need to retrieve from a given URL. An asynchronous operation (fetching the creds over the network) has been introduced, and you decide to use a promise for it:

function getCreds() {
  // now returns a Promise to be resolved when the network trip completes and you get the creds.
}

This may not seem like a problem, but at the very least there is another part of your code you need to update: the function, or functions, that use getCreds. For example, if your original consuming function contained:

const creds = getCreds();

you need to change that to something that assigns the actual credentials to creds, not the promise returned by getCreds. The easiest way is to use await:

const creds = await getCreds();

but then you may have to declare the consuming function as async, if it isn’t already. Declaring a function as async makes it return a promise. Now you have two functions returning promises where you had none. This pattern can bubble up until it reaches a function that was already working with promises. This effect is what I call promise poisoning.
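Here is a minimal, self-contained sketch of that bubbling effect (fetchFromVault and login are hypothetical names; the point is how async propagates up the call chain):

```javascript
// Hypothetical stand-in for the network trip that retrieves the
// dynamically generated credentials.
function fetchFromVault() {
  return Promise.resolve({ user: 'svc', pwd: 'generated' });
}

// getCreds now returns a Promise instead of the credentials themselves...
function getCreds() {
  return fetchFromVault();
}

// ...so the consumer has to await it, which forces the consumer to be
// async, which in turn makes *it* return a Promise to its own callers.
async function login() {
  const creds = await getCreds();
  return 'logging in as ' + creds.user;
}

login().then(console.log); // logs "logging in as svc"
```

One promise at the bottom turned two plain functions into promise-returning ones.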

To be honest, this is not a huge problem. It is something that can easily be dealt with in most cases. But in edge cases, it has the potential to completely reshape a piece of your program. So, while it is not something to fear, it is definitely something to keep in mind whenever you need to introduce promises in existing code.

Podman: Fixing Undefined Class Constant MYSQL_ATTR_USE_BUFFERED_QUERY

I’ve recently been spending some hours writing a small script to set up local web development environments. The idea is to have a minimal setup from which anyone could start a web development project using PHP and MySQL. I wrote it primarily to prepare a local environment for work on my new laptop. At work we are currently developing a system using Drupal 9, which requires the pdo_mysql extension, but the php image I’m using does not ship with that extension.

At some point I started getting the error: Error: Undefined class constant ‘MYSQL_ATTR_USE_BUFFERED_QUERY’. At first I didn’t know what this was about, so a quick search landed me on the help I needed: I was missing the pdo_mysql extension, which defines that constant. From there it was just a matter of figuring out how to install it in my container.

My first attempt was to apt-get install the php-mysql package. I knew the base image I’m using is built on Debian, so I thought I could just do that and be done with it. However, I got an error that is actually documented on the image’s page: “E: Package 'php-mysql' has no installation candidate“. I knew the php image provides helper scripts to install and configure php extensions, so I quickly realized I was doing it wrong.

The correct way to install the pdo_mysql extension is using the docker-php-ext-install helper script. The only thing I needed to do was to add this line to my Dockerfile:

RUN docker-php-ext-install pdo_mysql

and then re-build the container using my script, which I will be publishing later.

At the moment of writing, I still haven’t completely set up my local environment because I’m still facing issues that seem to be related to the database, but I get closer each time, so I’m confident I will be able to have it running soon.

Dual Boot Windows & Fedora 33

I recently decided to upgrade my laptop to get a little better performance, and to be able to sell the old one while it still wasn’t that old. I like to take good care of my gadgets so that when I decide to upgrade them, I can still sell them in good condition. I heard back from the new owner, and she is really happy with her purchase, and has seen an improvement in her work life because of the excellent condition of the laptop I sold her.

I, too, wanted a little improvement in my work life, and even though my laptop is not my main work computer, I do use it to work at home a few times a week. I decided to go from an Intel i5 to an i7, and from a mechanical hard drive to an SSD. The new laptop came with a touch screen as well, which I’m finding to be a nice convenience. So far I’m happy with the improvements, but every upgrade comes with the need to do some setup.

The first thing to do was install a new OS. My choice was to stay with Fedora, although I did consider some more lightweight alternatives. In the end I stuck with Fedora, mostly to be able to compare my old laptop with the new one under basically the same conditions. However, because I’m always thinking of re-selling my gadgets when they reach mid-life, or sometimes even earlier, I knew I had to keep Windows: most people will not want to buy a Linux-only computer.

When I first bought my previous laptop, I decided to try working with Windows and did my best to like it, but in the end it just isn’t what I want. This time I went straight to the Fedora website and got the installer. There are a couple of ways to install Fedora; I used Fedora Media Writer to create a bootable USB drive. While the bootable drive was being created, I shrunk the main disk partition to leave about 750GB of space for the Fedora installation. You can create a new partition during the installation process, but I used the Disk Management utility to create one right from within Windows. When everything was set up, I restarted the computer and started the installation process… It did not work.

The problem was that the installation program did not detect my hard drive; I had nowhere to install Fedora. A quick search revealed a BIOS (UEFI) setting that needs to be changed so Fedora can see the drive. I restarted the computer again, booted into Windows, and went to the BIOS settings. There I changed the SATA Operation option from RAID On to AHCI, and restarted the computer to start the installation process again. This time the installer detected the partition I had made previously. From there it was just a matter of following the installation steps, and I was all set up… to start setting up Fedora for development, but that is another story.

At this point it is important to note that once the SATA Operation setting has been changed, you will most likely not be able to boot into Windows anymore. To fix this, re-enter the BIOS setup, change the setting from AHCI back to RAID On, and restart the computer. On the OS selection screen choose Windows, and you will be able to boot it again. You will have to change the SATA Operation mode once more to boot into Fedora. I do not recommend doing this often, because I do not know what damage, if any, it could cause to your system. In my case this is fine, because the only reason I want to boot into Windows again is to remove Fedora and leave Windows as the only OS on the machine when I’m ready to sell it.

While it is true that I could create a recovery disk for Windows and get rid of it altogether, I decided to keep Windows: even though I don’t plan on using it, it is nice to know it is there in case I have no other option.

Laravel’s Debugbar and Ajax Requests

Laravel has a nice package for debugging called Debugbar, which is basically an integration of PHP Debugbar so that it works out of the box with Laravel. Debugbar has the nice feature of working with Ajax requests, displaying information about each request as well as relaying messages sent by the server, but on a project I’m working on, this little feature wasn’t working.

After some head scratching, I noticed in the Firefox network console that every Ajax request was accompanied by a request failing with error code 500. The error reported was a missing file in the storage/debugbar/ directory. I suspected that Debugbar had tried to save the request info there but could not, and was now failing to read it back. To fix this, I had to modify the permissions on that directory so that the user PHP runs as could write to it. In my case, I simply granted write permissions to “others” using chmod, but this is not always the best way to solve the issue, because it lets anyone with access to that directory write to it. A better solution is to grant write privileges to the appropriate users only, but since I’m working in a dev environment, granting write privileges to everybody was OK.

Setting Up Cron on a Remote Server

Beware: This post isn’t about setting up cron jobs, but about setting up cron on a server where it is not installed.

Yesterday I had to set up a cron job for a small script on a server. I logged into the server via SSH, and ran crontab -e, but was immediately surprised by an error:

-bash: crontab: command not found

What is this? Isn’t cron installed by default on all systems? Well, apparently not. Luckily, fixing this is rather simple. You need only do three things: 1) install cron, 2) start cron, 3) set cron to start with the system.

  1. Installing cron is as simple as running dnf install cronie. Two remarks here. 1) I was logged in as root. When I first created this server I was given a root user and password. I immediately created a new user for day-to-day use, but that user can’t even sudo, so to perform administrative tasks I log in as root. 2) This server runs Fedora, so you may need a different package manager than dnf, but the package name, i.e. cronie, should be the same.
  2. Starting cron is as easy as running systemctl start crond.service or service cron start, depending on your system.
  3. Setting up cron to start with the system can be achieved by systemctl enable crond.service.

It is likely that cron is set up and running on your system, but if you are not sure, you can run systemctl status crond.service or service cron status.

Writing expressive code

I was just writing a small piece of JavaScript, and one of the things I needed to do was get the contents of a node containing a span and a text node. The idea was to get the contents of the span and the rest of the text in two separate variables. My first instinct was to grab the contents of the span and then the contents of the text node that followed it, but I discovered that there were actually two different text nodes: one empty, and one with the text I wanted. Since this action needs to be repeated a number of times, and there is no guarantee there will always be two text nodes, I decided to just grab the content of the span and the full textContent of the parent element, and remove the text in the span from the full text.

When the time came to remove the contents of the span from the full text, I decided to use String’s slice method. It takes a starting offset and an ending offset. So, if I have the text “Hello” and slice starting at index 1, I get back “ello”. In my case, the full text starts with a code followed by a description, for example “ABC Some description”. The idea is to get one variable with the value “ABC” and another with the value “Some description”. All I had to do was slice the full string starting at the index equal to the length of the code string, which is what the span contains. But if you pay attention, you will notice a small problem: I would end up with the value ” Some description”. Notice the leading space at the start of the string.

At this point, many programmers would just call slice with the starting index set to the length of the code string plus 1. This gets the job done, but later on you, or some other programmer who inherits your code, may wonder why 1 was added to the length of the code string. Some programmers will tell you that this is what comments are for: clarifying situations like this one. But I’ve recently come around to the idea that comments in code are often just noise. And yes, I know this goes against most advice you’ve heard. I’m not saying comments are bad; I’m saying that if your code needs comments, your code is probably bad. And by “bad” I mean not expressive enough.

It turns out there is a better solution: use trim(). The trim method is very expressive. It tells you: “Hey, this string may or may not have leading or trailing spaces that I don’t want, so I will remove them”. Yes, it says all that in just 4 characters…
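Putting it together, and assuming the setup described above (the span holds the code, the parent’s textContent holds the whole string), the extraction reads like a sentence:

```javascript
// Stand-ins for the DOM values described above.
const code = 'ABC';                      // contents of the span
const fullText = 'ABC Some description'; // textContent of the parent element

// slice() drops the code; trim() says "remove the whitespace I don't want".
const description = fullText.slice(code.length).trim();

console.log(description); // "Some description"
```

No magic +1, and no comment needed to explain it.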

The point is: if you need to write a comment in your code, pause for a second and ask, “Is there a way to say in code what I’m trying to say in a comment?”

Composing functions with PHP

Composition is a great way to build complexity: you take small functions and combine them to build more complex functionality. For example, suppose you have the string “$12,000” and you need to take 10% off and represent the result as a money string. You could build a single function that does exactly that and go about your life, until you discover that your function has a bug in it, or that it can’t be reused for a similar problem, like taking 15% off instead of 10%. At this point you may be tempted to add a parameter indicating the percentage to subtract. This adds complexity to a function that is already doing more than it should, and it doesn’t guarantee you won’t have to change it again later. What if you now need to handle any arbitrary money string, not just “$12,000”?

The answer is composition. Instead of writing a single, bug-prone function that does way more than it should, write smaller functions that are easier to reason about and maintain, and which are more likely to be bug-free.

Assuming you are guaranteed 100% that the input will always be in the right format, meaning, a formatted number preceded with a dollar sign, no decimal point, and that uses commas to separate thousands, you could write the following functions:

stripDollarSign
removeThousandsSeparator
takePercentageOff
formatAsMoney

The first function would take a string, and return a string but without the dollar sign. It would be trivial to write, and almost guaranteed to be bug free.

The second function takes a string, removes the commas from it, and returns an integer. (Remember we are guaranteed the right format, so we can assume the remaining string can be cast into an integer). This function is also trivial to write and almost guaranteed to be bug free.

The third function takes an integer, subtracts a percentage from it, and returns another integer.

The last function takes an integer and formats it as a money string. Now all you have to do is call the functions one after the other:

formatAsMoney(takePercentageOff(removeThousandsSeparator(stripDollarSign("$12,000"))))

But this looks rather ugly and is hard to read. What we need is to combine those four functions into a single function that does what we want. That is what composition is.

Composition comes straight from functional programming, and it is such a great idea that I wanted to bring it to the PHP world, so I wrote the following function:

<?php
/**
 * Compose two functions.
 *
 * Returns a new function that calls $a after $b.
 *
 * @param callable $a
 *   The function to apply last.
 * @param callable $b
 *   The function to apply first.
 * @return callable
 *   A function that calls $a after $b.
 */
function compose(callable $a, callable $b):callable {
  $composed = function ($param) use ($a, $b) {
    $partial = call_user_func($b, $param);
    $whole = call_user_func($a, $partial);
    return $whole;
  };

  return $composed;
}
?>

This function is far from perfect, but it is a starting point. I decided to take only 2 functions as parameters, instead of an arbitrary number, because you only ever compose two functions. When you think you are composing more, as in our example, you are really composing them in pairs: you compose the first two functions, then compose the resulting function with the 3rd one, and so on.
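To make the pairing concrete, here is a sketch using compose with hypothetical implementations of the four helpers from the example. compose is repeated so the snippet runs on its own, and the input is assumed to be in the guaranteed “$12,000” format:

```php
<?php
// compose() as defined above, repeated so the sketch is self-contained.
function compose(callable $a, callable $b): callable {
  return function ($param) use ($a, $b) {
    return call_user_func($a, call_user_func($b, $param));
  };
}

// Hypothetical helper implementations, assuming the guaranteed format.
function stripDollarSign(string $s): string {
  return ltrim($s, '$');
}
function removeThousandsSeparator(string $s): int {
  return (int) str_replace(',', '', $s);
}
function takePercentageOff(int $n): int {
  return (int) round($n * 0.90); // take 10% off
}
function formatAsMoney(int $n): string {
  return '$' . number_format($n);
}

// Composing in pairs: the first two, then the result with the 3rd, and so on.
$discounted = compose('formatAsMoney',
  compose('takePercentageOff',
    compose('removeThousandsSeparator', 'stripDollarSign')));

echo $discounted('$12,000'); // $10,800
```

Each pairwise compose returns a new callable, so the final $discounted is one ordinary function you can pass around.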

I was so happy with my compose function that I considered putting it on Packagist, but I think the Functional PHP package by lstrojny is a better alternative, since it includes its own version of compose. The one thing I’m not so happy about is that the version in Functional PHP doesn’t seem to accept function names as parameters, which may limit the functions you can pass to it to compose.

You may have noticed we didn’t solve the issue with wanting to take a different percentage amount off of the original amount. This can be solved easily with another concept from functional programming: partial application. But I will leave that for another time.

Log From Within Puppeteer page.evaluate

A couple of days ago I was trying to find out why a small Puppeteer script wasn’t working correctly. I wanted to see what was going on inside a callback function passed to page.evaluate. The problem is that page.evaluate runs in browser context, so any console.log calls you make there log inside the Chrome instance and never reach your terminal. Luckily, Puppeteer has a way to bring those logs to your console:

page.on('console', function (msj) {
  console.log(msj.text());
})

The msj argument of the callback is an instance of ConsoleMessage, which has a few nice methods for working with console messages. One of them is type(), which returns the type of message that was logged. This lets you filter out the warnings that some pages generate:

page.on('console', function (msj) {
  if (msj.type() === 'log') {
    console.log(msj.text());
  }
})

I was pleased to find that Puppeteer has this nice mechanism for bringing console messages from the browser context into the script’s execution context. It made my work a lot easier.