Note: I wrote this post about three or four years ago. I’ve been sitting on it for a while because a) I don’t know if it’s all that interesting and b) it might reflect poorly on my server admin skills. In the interest of transparency (and because I’m committed to this one-blog-a-week malarkey) it finally gets published today.
Recently I have been dabbling in some server fun, which means I’ve been engaging in activity that would make proper server admins cry with absolute rage.
Let’s just say I have a website that makes use of RESTful URIs. We then have a system where we want to add a domain name but point it at a specific existing URI. For example…
http://www.main-website.com/restful/uri/long-string-of-parameters
…becomes the following…
http://new-domain.com/long-string-of-parameters
I could foresee a potential headache: in my limited experience of tying domains to Linux servers, you normally have to add a config file to sites-available and symlink it in sites-enabled. That could become an unscalable nightmare, since we are looking at adding domains on a regular basis just to mask existing URLs!
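For context, that traditional workflow looks something like this (the paths are the Debian/Ubuntu defaults and the domain is just a placeholder):

    # create a config file for the new domain, then enable it via symlink
    sudo nano /etc/nginx/sites-available/new-domain.com
    sudo ln -s /etc/nginx/sites-available/new-domain.com /etc/nginx/sites-enabled/
    sudo nginx -t && sudo service nginx reload

Doing that dance by hand for every masked domain is exactly the chore I wanted to avoid.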
Poking around NGINX and the sites-available folder, I found a file called “catch-all” that contained the following…
server {
    return 404;
}
Given that the only other files were for the main site’s config, it seemed obvious that this was a file to deal with “any other traffic”, which was interesting.
In the NGINX config I also noted that there were several “include” statements like the following:
include config-folder/domain-name-here.com/directory/*;
This include seemed to be saying “include any config values from files in this directory”.
Putting this new information together, I wondered if I could pass in a config file with directives for the domains we were looking to add. If I could, this should be easy enough to automate as it would just be a case of generating a new .conf file every time we add a domain!
First, I pointed a test domain that we wanted to add to the system at the remote server’s IP in the domain registrar panel. Accessing that domain then served up the default NGINX 404 page, which made sense.
In the catch-all config file I added the following line at the top (note this was above the server directive, not inside the curly braces with the 404!):
include /path/to/directory/where/we/will/add/redirects/*;

server {
    return 404;
}
This would mean that for any other traffic than the main domain, the config would check that directory for config files, and then just output a 404 if it couldn’t find or match anything.
Then, in the freshly-created “redirects” directory I referenced in the catch-all .conf, I added the following to a new file:
server {
    listen 80;
    server_name new-domain.com;

    location / {
        proxy_pass http://www.main-website.com/restful/uri/;
    }
}
Actually, I tell a lie – I originally just put a redirect to the main domain in there and, once I’d restarted NGINX, found that it worked! I only found proxy_pass was a thing after Googling how to “mask” a URL with NGINX.
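For the curious, a minimal sketch of what that first redirect attempt might have looked like (I don’t have the original to hand, so treat the exact directive as a guess – the domain names are the same placeholders as above):

    server {
        listen 80;
        server_name new-domain.com;

        # a 301 sends the visitor to the main site; the address bar changes,
        # which is why this redirects rather than masks
        return 301 http://www.main-website.com/restful/uri/long-string-of-parameters;
    }

The proxy_pass version keeps new-domain.com in the address bar while NGINX fetches the content from the main site behind the scenes.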
Please note I originally wanted to place these redirect files in the secure NGINX directory, but that threw up a small permissions issue that I will explain in a bit – instead, the folder sits above the public root but in a location accessible by scripts (PHP, CLI, etc.). Please do feel free to tell me how bad an idea this is in the comments, and most importantly why! I need to learn.
So I now had one domain masking another. Using PHP, I experimented with exec() to create and populate a file with the details outlined above, and it worked, albeit after reloading the NGINX config each time to pick up the changes. This wasn’t a problem – the domain redirecting process didn’t have to be instantaneous, and I could just set a cronjob to run every hour and reload the config.
To reload the NGINX config is a one-liner…
service nginx reload
…and doesn’t require NGINX to restart, so the website never has to disappear while everything reloads. I stuck it in a .sh file, gave it executable permissions (chmod +x) and tried running it. Surprise! It didn’t work.
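The script itself was nothing fancy – something along these lines (the filename is illustrative):

    #!/bin/sh
    # reload-nginx.sh: pick up any new config files without restarting NGINX
    service nginx reload

Made executable with chmod +x reload-nginx.sh, run and… no dice.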
This is because you need to run it with “sudo” for elevated permissions. Bugger. This is pretty much the problem I was having earlier when trying to generate the NGINX config files within the NGINX directory.
You can automate sudo on the command line, but almost every example given requires either echoing out the root username and password (which doesn’t work if you’re doing it from a PHP script using PHP 5.6+ and which, incidentally, is a BAD IDEA), or keeping the username and password in a file on the server and reading it in, which is also a BAD IDEA.
After much Googling, I found an article on StackOverflow which explained that I could just make an exception for that file to run without a password.
I added the line to /etc/sudoers and promptly broke sudo on the server because of a syntax error.
Before I proceed any further, let me reiterate: DO NOT EDIT THE SUDOERS FILE FOR ANY REASON, NO, NOT EVEN FOR THAT REASON. NEVER EVER TOUCH IT!
*Sighs*
A lot of solutions for fixing the sudoers file involve using pkexec or rebooting the server in recovery mode. For me, the former wasn’t installed (the cruel irony being that I needed sudo to install it) and the latter just completely locked me out, as we rely on SSH keys for everything.
How I did fix it was to power down the box, reset the server’s root password using the hosting panel, boot it back up, “su” into the root account with the new password, and then run “visudo” to remove the offending line that broke sudo in the first place. After that point I resolved to never, ever touch the sudoers file again.
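For reference, once you’re back on the box the recovery boils down to two commands (assuming the hosting panel has already reset the root password):

    su -      # become root with the freshly-reset password
    visudo    # opens /etc/sudoers with syntax checking; delete the bad line and save

visudo refuses to save a file with syntax errors, which is exactly the safety net I’d bypassed by editing the file directly.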
How to add the exception line? That’s where sudoers.d comes in. Much the same way I could extend the NGINX config with other config files, you can do the same with the sudoers file.
I created a file in sudoers.d called “scripts_permissions” (you can call it anything, as long as it contains neither a “.” nor a “~”) and added the following:
# Allow nginx to reload config without a password
[username] ALL=(root) NOPASSWD: /usr/sbin/service nginx reload
Obviously you need to replace [username] with the user that needs access. This is when I found out that cron runs each user’s crontab entries as the user who owns that crontab (not root)! Sounds simple, but it confused me because I was putting the wrong user in the rule (I’d assumed it had to be the root account).
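One thing I wish I’d known earlier: visudo can edit and validate these drop-in files directly, so a typo can’t take sudo down a second time:

    sudo visudo -f /etc/sudoers.d/scripts_permissions    # edit with syntax checking
    sudo visudo -cf /etc/sudoers.d/scripts_permissions   # check-only validation pass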
This now meant that I could run “sudo service nginx reload” on the command line and it wouldn’t ask for a password. I set the cronjob to run once an hour during certain times on weekdays and hey presto! It now picks up any new domain rules that I add to /path/to/directory/where/we/will/add/redirects/ (obviously with a real filepath), albeit on an hourly refresh.
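The crontab entry looks something like this (the hours here are illustrative, not the actual schedule):

    # goes in the unprivileged user's crontab (crontab -e), not root's
    0 9-17 * * 1-5 /path/to/reload-nginx.sh

That’s on the hour, 9am to 5pm, Monday to Friday, calling the reload script from earlier.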
The final piece is generating the redirects file containing the proxy_pass rules and moving it to our redirects folder – simple enough to do in PHP using the exec() function or, if you want to be extra safe, dump the PHP-generated files to a holding directory and then have the system move them into the sensitive directories, so scrubby PHP isn’t soiling your file system with its treacherous clumsy fingers.
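A minimal sketch of the generator, assuming a writable staging directory (every path and name here is a placeholder):

    <?php
    // generate an NGINX server block that masks a domain onto an existing URI
    $domain = 'new-domain.com';
    $target = 'http://www.main-website.com/restful/uri/';

    $conf = "server {\n"
          . "    listen 80;\n"
          . "    server_name {$domain};\n"
          . "    location / {\n"
          . "        proxy_pass {$target};\n"
          . "    }\n"
          . "}\n";

    // write to the staging directory; a privileged job moves it into the
    // include directory later, so PHP never touches the sensitive path
    file_put_contents("/path/to/staging/{$domain}.conf", $conf);

The hourly reload then picks the new file up the next time it runs.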
What about subdomains? You can do those too! You can just point a DNS entry for any subdomain at the server and these rules will still allow specific subdomains to proxy_pass specific URLs.
There’s one problem (one which I don’t know the cause of) – say you have rules for the following two subdomains:

test1.otherdomain.com
test2.otherdomain.com

If you have a wildcard DNS entry and then try “test3.otherdomain.com” in your address bar with the above two subdomains in place using the setup outlined here, it will bafflingly point at one of the proxy_pass rules, despite there only being rules for “test1” and “test2”. I have a feeling it points at the last rule that gets loaded, but I haven’t tested this.
The solution I found was to create another, separate rule just for the domain as a whole:
server {
    listen 80;
    server_name *.otherdomain.com;
    return 404;
}
This blocks all unwanted wildcard traffic and still allows the other rules to work for proxy_pass.
Ah, I had this problem too. We thought we’d been hacked or blocked by Google or something. As it turns out, we’d added a config file and the server decided it should prioritise it over the main configuration file. Our website was basically masking all traffic via a different domain, which really confused the app.
It’s a really, really simple fix. You need to add ‘default_server’ to the listen directive in your main config so that NGINX knows which of the myriad config files is the one it should fall back on.
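A minimal sketch of that fix, reusing the placeholder names from earlier (the root path is made up):

    server {
        # default_server marks this block as the one NGINX falls back on
        # when no other server_name matches the incoming request
        listen 80 default_server;
        server_name www.main-website.com;

        root /var/www/main-website;
    }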
Post by Sean Patrick Payne | February 9, 2020 at 12:00 pm | Articles, Portfolio and Work
Tags: 404 not found, domain masking, how to break a server by faffing with the sudoers file, Linux, NGINX, server tinkering