Running BIND9 at home
Quick introduction to BIND9
BIND9 (Berkeley Internet Name Domain) is one of the most widely used DNS servers in the world. It has been battle-tested by large organizations and ISPs alike, and it can handle a large volume of DNS queries.
By installing and configuring BIND9 on your home network, you can take complete control over DNS resolution for all devices in your home and customize it to your needs. This lets you speed up DNS queries by caching frequently accessed domain names, resulting in better perceived speed.
Also, I’ve always wanted to manage my own DNS zones and create entries for various services and machines I run at home.
Prerequisites
I chose to run BIND9 inside Docker so that my configuration can just be a couple of files I store on git instead of a full-blown Ansible playbook.
You do, however, need a VM or a server with a static IP on your LAN. This is easy to do with most Linux distros. On Debian Bullseye, in /etc/network/interfaces:
auto eth0
iface eth0 inet static
address 192.168.0.4
netmask 255.255.255.0
gateway 192.168.0.1
broadcast 192.168.0.255
Then, run systemctl restart networking. Make sure you don’t lock yourself out if you’re connected via ssh.
Installation
After that’s done, we can move on to the actual DNS server. You need Docker and docker-compose (now bundled with the docker command) installed.
Here is my docker-compose.yml file:
version: '3'
services:
  bind9:
    image: 'ubuntu/bind9:latest'
    environment:
      - BIND9_USER=root
    ports:
      - '53:53/tcp'
      - '53:53/udp'
    volumes:
      - ./config:/etc/bind
      - ./cache:/var/cache/bind
    restart: unless-stopped
Contrary to what some people believe, DNS doesn’t exclusively use UDP. Zone transfers run over TCP, and any response too large to fit in a single UDP packet is retried over TCP. That’s why we need to forward both port 53/udp AND 53/tcp.
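The two volumes in the compose file map to a small directory tree next to docker-compose.yml. A minimal sketch of setting it up (the directory names are the ones from the volumes: section above):

```shell
# Create the two host directories the compose file bind-mounts:
#   ./config -> /etc/bind        (named.conf and zone files)
#   ./cache  -> /var/cache/bind  (BIND's working directory)
mkdir -p config cache
ls -d config cache
```

Docker would create missing bind-mount directories on its own, but creating them up front keeps their ownership under your control.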
Now for the configuration file, config/named.conf:
acl internal {
    127.0.0.1;
    192.168.0.0/24;  # Replace this with your IPv4 subnet
    beef::0/64;      # Replace this with your IPv6 subnet
};

controls {
    # Control channel for rndc
    inet 127.0.0.1 port 953 allow { internal; } keys { "axfr."; };
};

options {
    forwarders {
        # You can use your ISP's DNS servers or just Cloudflare's 1.1.1.1
        1.1.1.1;
        8.8.8.8;
    };
    allow-query { internal; };
    allow-recursion { internal; };
    dnssec-validation auto;
    recursion yes;
};

# This key was generated using the
# `tsig-keygen <name of the key> > <name of the key>.key` command
key "axfr." {
    algorithm hmac-sha256;
    secret "<redacted>";
};

# Manual zone
zone "lan" IN {
    type master;
    file "/etc/bind/lan.zone";
    allow-update {
        key "axfr.";
    };
    zone-statistics yes;
};
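A side effect of that allow-update clause: the zone accepts dynamic updates signed with the axfr. key, so you don’t have to edit the zone file by hand for every change. A sketch of what an nsupdate batch file could look like (the git.lan name and its IP are made up for illustration):

```shell
# Batch file for nsupdate: add one A record to the "lan" zone.
# The update has to be signed with the TSIG key file produced by
# tsig-keygen, e.g.:  nsupdate -k <key file> update.txt
cat > update.txt <<'EOF'
server 192.168.0.4
zone lan.
update add git.lan. 300 A 192.168.0.7
send
EOF
cat update.txt
```

Dynamic updates land in the zone's journal file, which is one more reason the container needs a writable /var/cache/bind.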
And finally a zone file, config/lan.zone:
$ORIGIN .
$TTL 300        ; 5 minutes
lan             IN SOA  ns.lan. nobody.lan. (
                        21         ; serial
                        43200      ; refresh (12 hours)
                        900        ; retry (15 minutes)
                        1814400    ; expire (3 weeks)
                        7200       ; minimum (2 hours)
                        )
                NS      ns.lan.
$ORIGIN lan.
ns              A       192.168.0.4
home            CNAME   server
jenkins         A       192.168.0.6
server          A       192.168.0.2
There’s a lot going on in this file, so let’s break it down:
- SOA records define the “Start of Authority” of a zone. I’m not going to go into much detail about that here, but you can find a lot of information about it online
- A records map a hostname to an IPv4 address
- CNAME records map a hostname to another hostname, creating an alias (e.g. for an apache2 server with multiple virtual hosts)
- NS records indicate which DNS server has authority over a zone; in this case, we’re defining ns.lan as the authoritative DNS server for the zone lan.
You may have noticed that we didn’t define a TTL for each record. That’s because we set it to 300 at the top of the file with the $TTL directive. You can override this per-record if you want.
$ORIGIN . basically means “all the records I’m going to define from now on end in .”, . being the DNS root that all zones end in.
Ideally, you’d want a reverse zone for each zone you define but that’s not really needed for a home network so you can skip that step.
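One gotcha when editing the zone file by hand: the SOA serial has to increase, or BIND will consider the zone unchanged. A small sketch of bumping it automatically (it operates on a stub of the SOA, recreated here so the snippet stands alone):

```shell
# Stub of the top of config/lan.zone, just enough to show the idea:
cat > lan.zone <<'EOF'
lan IN SOA ns.lan. nobody.lan. (
        21 ; serial
        43200 ; refresh (12 hours)
EOF
# Increment the number on the line tagged "; serial":
awk '/; serial/ { sub($1, $1 + 1) } { print }' lan.zone > lan.zone.new
mv lan.zone.new lan.zone
grep '; serial' lan.zone   # the serial is now 22
```

After bumping the serial, tell BIND about it, e.g. by restarting the container or with rndc reload inside it.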
Make sure the host you’re running this container on is accessible from the rest of your network and you should be good to go with just a docker compose up -d.
Testing and debugging your configuration
One really handy tool when it comes to DNS is the dig command. You can easily install it on Linux and macOS. Here is how to test out your newly created DNS server:
dig @192.168.0.4 A server.lan.
(replace 192.168.0.4 with the actual IP of your DNS server)
If it returns the actual IP address you defined for that record, your DNS server is working as expected!
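If you want to script against dig, the +short flag prints just the answer data (dig +short @192.168.0.4 server.lan.), or you can parse the answer section yourself. A sketch using a canned answer line, in the same five-column format dig prints with +noall +answer, so it runs without a live DNS server:

```shell
# A dig answer line has five columns: name, TTL, class, type, data.
answer='server.lan.    300    IN    A    192.168.0.2'
# Keep only the data column of A records:
ip=$(printf '%s\n' "$answer" | awk '$4 == "A" { print $5 }')
echo "$ip"   # 192.168.0.2
```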
If it did not, then you can debug your DNS server by looking at the logs with docker logs <id of your container>.
DHCP configuration
There is one last step to this process however: How do your devices know what DNS server to use? The answer is DHCP (for IPv4 at least).
Now, your DHCP configuration may vary depending on your ISP router’s web interface, whether you’re running your own router behind a modem, etc., so I’ll leave this step for you to figure out. All you need to do is set your primary DNS server to the IPv4 address of the machine running BIND9, and pick a server on the public internet as your secondary DNS server in case your Docker host goes down for whatever reason.
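For reference, if your DHCP server happens to be ISC dhcpd, the relevant knob is a one-liner in dhcpd.conf (the addresses here match the examples above; adjust them for your network):

```
# Hand out our BIND9 box first, a public resolver as fallback:
option domain-name-servers 192.168.0.4, 1.1.1.1;
# Lets clients resolve bare hostnames like "server" as server.lan:
option domain-name "lan";
```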
Closing thoughts
If you want to be fancy, you can set up a DNS slave on another machine and use that as your secondary DNS server, but for simplicity’s sake, I decided to only cover a simple master-only setup here.
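For the curious, a sketch of what the slave side might look like in that second machine’s named.conf, reusing the axfr. TSIG key from above to sign zone transfers (untested here):

```
zone "lan" IN {
    type slave;
    masters { 192.168.0.4 key "axfr."; };
    file "/var/cache/bind/lan.zone";
};
```

The master would also need an allow-transfer { key "axfr."; }; clause in its zone "lan" block so that transfers signed with the key are permitted.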