Properly Triggering Vue Component Lifecycle Hooks

This is something that had me scratching my head for a little while: if I have a Vue component with a mounted hook, how can I trigger that hook every time the component is updated?

The first thought is that if you want a hook that triggers every time the component is updated, you need an updated hook, but that is not always the right tool. Here is my particular case:

I have a component that renders a list of options; for example, a list of venues. Each venue, in turn, has a list of sections. The list of sections is managed by another component, and the two share data through a parent component. So far we have a parent component that owns both the list of venues and the list of sections.

When the user clicks on a venue, the parent is notified via a custom event, and it updates the data that is passed to the list of sections (the venue info). The sections component uses this data to fetch the actual list of sections from the server, and it does that in a mounted hook. The reason for not passing the list of sections to the component through a prop is that the component should be responsible for fetching that list, so that the functionality is encapsulated.

The problem is that the hook will run only once, the first time the user clicks on a venue. After that, the venue information the component receives from its parent will be updated, but that won’t trigger the mounted hook again. We could use an updated hook, but since fetching the list of sections from the server will itself update the component, we would enter an infinite loop. At this point we need a way to re-run the mounted hook, but how?

The solution is to use a key attribute on the sections component, and update it whenever the venue information changes, i.e. when the user clicks on a venue. We could use, for example, the selected venue’s id as the key value. This works because changing the key attribute tells Vue that the component should be treated as a new instance.
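As a sketch, the parent template could look something like the following (`venue-sections` and `selectedVenue` are hypothetical names): because `:key` is bound to the venue’s id, each click on a different venue re-creates the component and re-runs its mounted hook.

```html
<!-- Hypothetical names: changing the bound key destroys the old
     instance and mounts a fresh one, re-running its mounted hook -->
<venue-sections :venue="selectedVenue" :key="selectedVenue.id"></venue-sections>
```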

For more information, read the documentation for `key`.

Sending E-mails: Fedora 29, Apache, & PHP

In the past, I’ve touched on the subject of how to set up a localhost to send emails. At some point it was enough to just install postfix and follow a few prompts. Then port 25 was blocked by the ISP, and it became necessary to configure postfix to use an external SMTP server. Now, on Fedora 29, I wanted to enable e-mail sending using PHP’s mail function. It took me a full day, but I finally got it working. It is worth mentioning that this time I didn’t want to set up postfix to use an external SMTP server. I did try using postfix, but as the logs revealed, port 25 is blocked by my ISP.

One thing to note, which actually cost me a lot of time, is that you won’t find a mail log file in Fedora 29. Fedora now uses systemd’s journal for logging, so you will have to use journalctl (e.g. `journalctl -u postfix`) to view the logs. Reviewing the logs is how I found out that postfix was timing out when trying to send emails.

In any case, my idea wasn’t really to use postfix. Fedora 29 ships with esmtp, which is a “send-only sendmail emulator” according to the man page. The nice thing about it is that you should only have to create a .esmtprc file in your home directory with the remote SMTP server information. This worked great. I could send emails using `mail` on the command line, and I could even fire up `php -a` and use the mail function to send out emails. Everything was great until I tried to send emails from one of my locally hosted dev sites (finally getting around to building it, lol). The email was never even accepted for delivery, as denoted by the false value returned by the mail function.
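For reference, a minimal .esmtprc looks something like this (the host and credentials below are placeholders; see the esmtprc man page for the full option list):

```
# ~/.esmtprc -- placeholder values, adjust for your SMTP provider
hostname = smtp.example.com:587
username = "user@example.com"
password = "secret"
starttls = enabled
```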

In order to investigate the matter, I decided to try to send emails as the apache user from the command line. This resulted in an error: the command complained about not being able to create a .esmtp_queue directory in /usr/share/httpd/. I decided to manually create it and make apache its owner. I tested again, and it worked! Note that by this point my .esmtprc was located in /etc so that apache could use it. I suppose I could create a .esmtprc file in the httpd directory, but I haven’t tried that.

I knew now that apache could send emails, so I checked again using the locally hosted site: no luck. mail continued to return false, indicating that the email wasn’t even accepted for delivery. At this point, the only time the mail function had returned true was when I used postfix, but I had decided against it, since esmtp is made specifically for the purpose I intended.

One thing had changed though. Before I manually created the queue directory for the apache user, the mail function simply failed; now, SELinux was displaying an alert, but when I tried to open it, it was blank. I remembered that at some point I had seen an SELinux entry in the logs, so I decided to revisit the logs and look for a similar entry. I found this:

***** Plugin catchall_labels (83.8 confidence) suggests *******************

If you want to allow mktemp to have write access on the .esmtp_queue directory
Then you need to change the label on .esmtp_queue
# semanage fcontext -a -t FILE_TYPE '.esmtp_queue'
where FILE_TYPE is one of the following: admin_home_t, courier_spool_t, etc_aliases_t, etc_t, exim_log_t, exim_spool_t, mail_home_rw_t, mail_home_t, mail_spool_t, mqueue_spool_t, munin_var_lib_t, postfix_etc_t, qmail_spool_t, sendmail_log_t, system_mail_tmp_t, tmp_t, user_home_dir_t, uucpd_spool_t, var_log_t.
Then execute:
restorecon -v '.esmtp_queue'

***** Plugin catchall (17.1 confidence) suggests **************************

If you believe that mktemp should be allowed write access on the .esmtp_queue directory by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
allow this access for now by executing:
# ausearch -c 'mktemp' --raw | audit2allow -M my-mktemp
# semodule -X 300 -i my-mktemp.pp

I decided to change the context to mail_home_rw_t, substituting it for FILE_TYPE in the suggested semanage command and then running restorecon, and tried again. It worked.

It is worth noting that at some point I also ran setsebool httpd_can_sendmail=1. Without that, httpd can’t use sendmail.

The WordPress Auto Draft

Disclaimer: I’ve been away from the WordPress world for over a couple of years now, so I don’t know if this is common knowledge in WordPress circles or not. Heck! I don’t even know if this is something that has always happened, or something that was implemented while I was away from WP, but it is definitely something that threw me off a bit recently.

After some time away from WP, I decided it was time to go back and try all the new features that have arrived in WP since I left. I started developing a small webapp. I know I could have used something better suited for the task, like Laravel, but the goal here is to get WP to do something that clearly stretches its limits. One of the things I wanted to do was to assign an ID to posts of a custom post type. I didn’t want to rely on the post’s id, but rather create an md5 hash of the post’s title. The key part was to create it when the post was first created and then never change it again, even if the title changed. The post’s id would have sufficed, but remember that this is an exercise to get back into WordPress development.

My first instinct was to use the `save_post_{$post->post_type}` action hook. This action passes a boolean parameter to the callback function that specifies whether the post being saved is an update of an existing one. If it isn’t, we can assume the post being created is brand new. Relying on this boolean seemed like the obvious choice, since it would allow me to save the md5 hash only once, when the post is first created, and never touch it again. So, I did it this way. However, later, when I added a custom column to the all-posts table to display each post’s custom id, I realized that all the posts had the same custom id. How was this possible? Digging into WordPress, I found out that every time you open the new post page, a new post is saved in the database with the title “Auto Draft”, so all my md5 hashes were being generated from that “Auto Draft” title. No wonder they were all the same!
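With hindsight, one way to sidestep the auto draft is to check the post status instead of relying solely on the update boolean. Here is a sketch (the post type `my_type` and meta key `custom_id` are hypothetical names for illustration):

```php
// Sketch for a WP plugin; 'my_type' and 'custom_id' are hypothetical names.
add_action('save_post_my_type', function ($post_id, $post, $update) {
    // Skip the "Auto Draft" placeholder created by get_default_post_to_edit(),
    // as well as revisions.
    if ($post->post_status === 'auto-draft' || wp_is_post_revision($post_id)) {
        return;
    }
    // Write the hash only once, and never touch it again,
    // even if the title changes later.
    if (!get_post_meta($post_id, 'custom_id', true)) {
        update_post_meta($post_id, 'custom_id', md5($post->post_title));
    }
}, 10, 3);
```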

So, how does this happen?

When you first open the new post page (post-new.php), a function named `get_default_post_to_edit` is called, which, among other things, creates a new post in the DB, creates a new post object for it, and passes it along to other functions and hooks. This is actually a good thing, because it makes sure you are always working with a real post object that has a DB record and an ID. However, if you aren’t aware of it, it may leave you scratching your head for a while when something doesn’t happen as you’d expect. Knowing these kinds of subtleties is what makes the difference between a good developer and an excellent one.

Read the docs, but more importantly, read the code!

Nested Form Input in Drupal

HTML forms are made up of input controls that, when submitted, are converted into an array of values. For example, consider the following form:

<form method="post" action="/">
<input type="text" name="input1" />
<input type="text" name="input2" />
<input type="submit" name="send" value="Send" />
</form>

When a user fills out this form, and submits it, the data is sent by the browser using the POST HTTP method in this fashion:

input1: value
input2: value
send: send

If you were to read that with PHP, you would have to access the `$_POST` array, which would contain those key-value pairs. However, some forms are complex enough that flat arrays like that make it hard to work with the data. In those cases it is useful to get multidimensional arrays. HTML allows you to do just that:

<form method="post" action="/">
<input type="text" name="persons[0][name]" />
<input type="text" name="persons[0][address]" />
<input type="text" name="persons[1][name]" />
<input type="text" name="persons[1][address]" />
<input type="text" name="persons[2][name]" />
<input type="text" name="persons[2][address]" />
<input type="submit" name="send" value="Send" />
</form>

In PHP that results in the following `$_POST` array:

Array(
  [persons] => Array(
    [0] => Array(
      [name] => john
      [address] => smith
    )
    [1] => Array(
      [name] => jane
      [address] => doe
    )
    [2] => Array(
      [name] => jason
      [address] => foo
    )
  )
  [send] => send
)

As you can see, the information for `persons` is now nicely arranged in an array of arrays, each of which contains the information associated with a person.

If you want to replicate that same structure in Drupal, it is quite easy. Using the Drupal Form API you can easily create form arrays that Drupal knows how to render, but those forms usually submit data as a flat array. If you want to get a multidimensional array like the one above, you need to use the `#tree` attribute on the element that you want to be the root of the nested structure. For example, to get the same input we have above, you would do something like:

$form = [
  'persons' => [
    '#type' => 'markup',
    '#tree' => TRUE,
    '0' => [
      'name' => [
        '#type' => 'textfield',
      ],
      'address' => [
        '#type' => 'textfield',
      ],
    ],
    '1' => [
      'name' => [
        '#type' => 'textfield',
      ],
      'address' => [
        '#type' => 'textfield',
      ],
    ],
    '2' => [
      'name' => [
        '#type' => 'textfield',
      ],
      'address' => [
        '#type' => 'textfield',
      ],
    ],
  ],
];

If you were to omit the `#tree` attribute on the `persons` element, you would get a flat array of values, with name and address values for the last pair of input fields only. Also note that the nesting begins at the point where the `#tree` attribute is specified. For example, if `persons` were itself a child of another element called `personal_data`, the resulting POST data would remain the same, unless the `#tree` attribute were specified on the `personal_data` element. In that case, the `persons` array would also be a child of a `personal_data` member in the `values` array where Drupal stores the submitted data when dealing with form submissions.
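In a submit handler, the nested values can then be read at the same depth. A sketch, assuming the Drupal 8+ API with its FormStateInterface (in Drupal 7 the equivalent would be `$form_state['values']['persons'][0]['name']`):

```php
// Inside a FormBase subclass (Drupal 8+ assumed):
public function submitForm(array &$form, FormStateInterface $form_state) {
  // With '#tree' => TRUE, submitted values keep their nesting,
  // so the first person's name lives at ['persons'][0]['name'].
  $first_person_name = $form_state->getValue(['persons', 0, 'name']);
}
```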

404 on Private Drupal Files

A few days ago, it was brought to my attention that some links pointing to private images on a Drupal site were not working. The images are submitted by users of the site to request an estimate on repairs for their luxurious cars, and they go into the `private` directory of the Drupal site.

For those who don’t know, Drupal supports different types of files, such as public and private. Private files, which use the `private://` scheme, live in a different place on the server than public ones. When Drupal builds a link to a private file, it uses a path that is not the actual path on disk. The path usually starts with `system/files/`, but in the Drupal directory structure there is no directory called `system`. This means Drupal intercepts requests for that URL and serves the file itself from its actual location.

For some reason, however, trying to access a private file on the site returned nothing but a 404 error. This started happening after the site was moved to a new server. Tracking the problem down, I found that the Drupal function drupal_fast_404 was being called in the settings.php file. This function takes care of returning a 404 page whenever the requested URL doesn’t match certain criteria. In this case, the fix was to simply edit the 404_fast_paths_exclude setting to make sure that any path starting with system/files/ and ending with a recognized image extension would not fast-404.
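For reference, in Drupal 7 this setting lives in settings.php. Something along these lines excludes private-file paths from the fast 404 (the exact regex is an assumption; adapt it to the paths on your site):

```php
// settings.php (Drupal 7): don't fast-404 image styles or private files
// served through system/files/.
$conf['404_fast_paths_exclude'] = '/\/(?:styles)\/|\/system\/files\//';
```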

Fixing MySQL Corrupt Table “ERROR 2013 (HY000): Lost connection to MySQL server during query”

Today I tried to drop a database in MySQL, which resulted in an error: “ERROR 2013 (HY000): Lost connection to MySQL server during query”. At that point everything started to go wrong, to the point where the MySQL server went completely down and I was unable to start it again. In short, this is how things happened:

  1. Tried to drop the database, which resulted in an error.
  2. Kept trying to drop it, which resulted in the same error.
  3. Tried increasing net_read_timeout and net_write_timeout on the server, as well as --connect-timeout in the mysql client. No luck.
  4. Searched the web; found no useful advice, but found a possible cause: data corruption.
  5. Tried to recover the data with a backup of the database from the production server. Shit went down!
  6. The server stopped and refused to start again. Following the instructions in the stop error led me to new search terms.
  7. We have a new problem now; did more web searching.
  8. Found a useful page which recommended disabling InnoDB and setting MyISAM as the default storage engine.
  9. After doing that, the server worked, but trying to drop the database resulted in an error: ERROR 1010 (HY000): Error dropping database (can’t rmdir ‘./db_name’, errno: 39 “Directory not empty”).
  10. Decided to manually remove the directory, which removed the database.
  11. Created the database again and used the backup to restore it.
  12. Got an error saying that the table ‘table’ already existed. ‘table’ is the first table that the backup file tries to create.
  13. Tried to drop the database, but got the same error as in #9.
  14. Manually deleted the database directory again, created the database, and then did a proper DROP.
  15. Created the database again and restored from the backup. It worked!
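For completeness, the workaround from step 8 amounts to a my.cnf fragment like the one below. It only applies on MySQL/MariaDB versions that still allow disabling InnoDB, and it was a last resort during recovery, not a recommendation:

```ini
# my.cnf fragment matching the workaround described above (use with care)
[mysqld]
skip-innodb
default-storage-engine=myisam
```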

I read a few pages during this problem, and found these useful:

SOLVED: InnoDB Error: space header page consists of zero bytes: xampp

MySQL: Error dropping database (errno 13; errno 17; errno 39)

Fixing PHP’s ‘Warning: simplexml_load_file(): I/O warning : failed to load external entity’

Today I had to revisit a Drupal module I wrote in 2015. The module parses an external XML file, which it loads using simplexml_load_file. When I opened the copy of the site that lives on my local server and visited the page that uses the module, I got the error in the title of this entry. I checked php.ini to make sure allow_url_fopen was On. It was.

Searching the web, I came across a bug report on the PHP website. It suggested restarting the Apache server, since the contents of resolv.conf may have changed after the server and PHP had already picked up their DNS server entries. This made sense to me, since I had switched networks since turning the computer on. I gave it a try, and it worked:

service httpd restart

Other info:

  • The location of php.ini in my case, running Fedora with php installed using dnf, is /var/php.ini
  • The location of resolv.conf is /etc/resolv.conf


Opening port 80 with iptables

A few days ago I was contacted by a client whose website had been offline for a month. The site has a history of going offline because of server overloads. The client is OK with the occasional downtime, as the site is just a pet project. Every time the server goes down, we usually restart the MySQL server and everything goes back to normal, but this time it was different. Instead of restarting the MySQL server, the whole machine was restarted by power cycling it, and that is when everything went wrong.

When the site came back up, the HTTP and MySQL servers had to be started manually, and the program that communicates with the server control panel for status reporting was down as well; it also had to be started manually. However, after all of this, the site was still offline. Trying to access the website resulted in an error like the one you see when you are not connected to the internet.

I decided to curl the site, and in return I got an error saying “No route to host”. Pinging the server worked, but for some reason I could not connect to it. Because of this, I knew it was not a DNS problem, since the URL resolved to the server IP correctly; the connection itself was being refused. Could it be a server issue?

I decided to take a look at the server config files, only to find out everything was configured correctly. At this point I was absolutely intrigued.

I decided to search the interwebs to find out what people said about the “no route to host” problem and, as I suspected, it was a connection problem: the machine was not refusing the connection, the network was. I confirmed that by fetching localhost from the server itself. I got the expected result, so the server was not refusing the connection.

It was at this point that I decided to concentrate on the network side of things. I started by determining the port the server was listening on, which I did by checking the server configuration files. Once I confirmed the server was listening on port 80, I decided to check whether port 80 was open. Using iptables, I determined that it was not. From there, it was just a matter of opening the port. I did that by running

iptables -I INPUT 1 -p tcp --dport 80 -j ACCEPT
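One caveat: a rule inserted this way lives only in memory and is lost on reboot. On systems using the legacy iptables service (an assumption about this server's setup; firewalld-based systems work differently), the running rules can be persisted with:

```shell
# Persist the in-memory iptables rules across reboots
# (assumes the legacy iptables service, not firewalld)
service iptables save
```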

I consulted a number of documents while trying to solve this problem; some proved useful, others did not. It is worth mentioning that the server runs Fedora.

Some Linux FS management tips

A few days back, I decided to upgrade from Fedora 19 to 24, and to do so using a USB stick. I downloaded the installer and copied it onto the USB stick using dd.

A couple of days later, when I wanted to reuse that USB stick, I found out I could do nothing with it, since its file system was write-protected. Trying to format the device did not work for the same reason. Using the Disks utility in Fedora 24 showed 3 partitions on the USB stick, but trying to delete them resulted in an error related to block size: the drive reported one size, but Linux reported another. After a bit of searching, I came to an Ask Ubuntu answer that recommended using mkfs.vfat to solve the problem.

  • You can use mkfs.vfat, or one of the other mkfs.* programs, to format a stubborn drive, or any other drive for that matter.

Having fixed this problem, I decided to try with an old USB stick I got at a meetup back when Barnes & Noble were preparing to launch their Android-based reader, the NOOK. That USB stick has never worked. It doesn’t mount, and upon inspection, there is no partition on it, let alone a file system. I tried making a partition using the Disks utility, but failed. I then used parted on the command line, but no luck: the device was write-protected. After a bit of searching, I found out you can use hdparm on write-protected devices to make them writable. This did not work on my USB stick, which leaves me thinking that the device is just damaged.

  • You can use parted to make partitions on a device.
  • You can use hdparm to set various options on drives, like write protection.
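As a sketch of the commands involved (`/dev/sdX` is a placeholder; double-check the device name before running either of these, since the first one destroys the drive's contents and both require root):

```shell
# Re-create a FAT filesystem on a stubborn drive, -I allows
# formatting the whole device rather than a partition (destructive!):
mkfs.vfat -I /dev/sdX

# Inspect and attempt to clear the read-only flag on a device:
hdparm -r /dev/sdX     # show the current read-only setting
hdparm -r0 /dev/sdX    # try to mark the device writable
```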

Speaking of partitions, having a USB stick with more than one partition on it can be quite useful, and cool. Just remember that dumb Windows doesn’t mount all of them, only the first one.

Other tips:

  • Use mount to find out how a device is mounted.
  • Use dosfsck to check and repair DOS file systems

I hope these tips prove as useful at some point in your life as they have in mine. Thanks to all who share wisdom around the net; what would we do without them…