If you're used to the Docker world of running containers, you'll know that by default containers are attached to a bridge, live in a separate network on the host, and use NAT to communicate with the outside world.

When you hear the word "NAT", you should immediately know that only connections initiated from the containers will receive return traffic; nothing on the outside can open a connection to them.

When you want to make particular container ports available, you typically use the port-mapping / publish option (-p / --publish). Those ports, however, are only reachable through the host's own addresses, never through an address of the container itself, and you can restrict a mapping further by binding it to one specific IP address of the host.

At home, I have a little server running quietly in a corner of my apartment, on which I run several containers through Podman. Since I don't want the hassle of dealing with a separate network on a bridge and working around the NAT issue, nor the limitation of binding the containers to the host's own network namespace, I decided to use macvlans.


Macvlans essentially provide a virtual network interface with its own MAC address, attached to a physical interface. This allows multiple containers to each have their own IP and MAC address and, more importantly, to expose all their ports as if they were directly attached to my home network.

Note: while macvlans are easy to set up, the host system the containers run on will not be able to connect to the services inside them. If you don't want that limitation, you'll have to look into using a proper bridge; I'll show you how at the bottom.

To this end, I first created a CNI configuration file, describing the network I want to create.

# /etc/cni/net.d/host_local.conflist

{
   "cniVersion": "0.4.0",
   "name": "host_local",
   "plugins": [
      {
         "type": "macvlan",
         "master": "enp3s0",
         "ipam": {
            "type": "host-local",
            "ranges": [
                [
                    {
                        "subnet": "192.168.50.0/24",
                        "rangeStart": "192.168.50.2",
                        "rangeEnd": "192.168.50.254",
                        "gateway": "192.168.50.1" 
                    }
                ]
            ],
            "routes": [
                {"dst": "0.0.0.0/0"}
            ]
         }
      },
      {
         "type": "tuning",
         "capabilities": {
            "mac": true
         }
      }
   ]
}

In this example, I define a network configuration called "host_local" of type "macvlan", attached to the server's physical network port "enp3s0", which connects to my home network. The rest are the details of my local network: the subnet, the range of addresses the host-local IPAM plugin may hand out, and the default gateway.
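CNI tends to produce rather cryptic errors when a conflist contains malformed JSON, so it can be worth syntax-checking the files before Podman picks them up. A small sketch using Python's stdlib json.tool (jq works just as well if you have it installed):

```shell
# Syntax-check every CNI config; a stray comma would otherwise surface
# later as a much less obvious Podman error. python3 -m json.tool is
# part of the Python standard library.
for f in /etc/cni/net.d/*.conflist; do
  python3 -m json.tool < "$f" > /dev/null && echo "$f: OK"
done
```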

With this setup, I can start a container through Podman, give it a specific IP and MAC address on the network, and reach it from anywhere on the LAN. Just make sure those addresses fall outside the range serviced by your home router's DHCP server, to avoid it handing out addresses that are already in use.

Example of running the UniFi Controller container. This is a case where macvlan really matters: behind a NATted bridge your devices would not get provisioned, because they need to contact the controller directly. (With a macvlan network all of the container's ports are reachable on its own IP anyway, so the -p mappings below are kept mainly as documentation.)

podman run \
  -d \
  --privileged \
  --name unifi-controller \
  --dns 192.168.50.2 \
  --dns-search lan \
  --net host_local \
  --ip 192.168.50.12 \
  --mac-address 2A:7C:AA:ED:A2:AF \
  -e PUID=1000 \
  -e PGID=1000 \
  -p 3478:3478/udp \
  -p 10001:10001/udp \
  -p 8080:8080 \
  -p 8081:8081 \
  -p 8443:8443 \
  -p 8843:8843 \
  -p 8880:8880 \
  -p 6789:6789 \
  -v unifi-controller:/config \
  linuxserver/unifi-controller
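The MAC addresses I pass are locally administered ones (second hex digit 2, 6, A, or E), so they can never collide with a vendor-assigned address. In case you need more of them, here's a small helper sketch of my own (not part of Podman) for generating one:

```shell
# Generate a random locally administered, unicast MAC address: set bit 1
# (locally administered) and clear bit 0 (multicast) in the first octet;
# the remaining five octets are random.
set -- $(od -An -N6 -tu1 /dev/urandom)
printf '%02X:%02X:%02X:%02X:%02X:%02X\n' \
  $(( ($1 | 0x02) & 0xFE )) "$2" "$3" "$4" "$5" "$6"
```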

Another example, running Home Assistant:

podman run \
  -d \
  --privileged \
  --name home-assistant \
  --dns 192.168.50.2 \
  --dns-search lan \
  --net host_local \
  --ip 192.168.50.11 \
  --mac-address 2A:7C:AA:ED:A2:AE \
  -v home-assistant:/config \
  homeassistant/home-assistant:stable

Note: if you don't want to use static IP addresses but prefer DHCP, that's possible: https://www.redhat.com/sysadmin/leasing-ips-podman. There is, however, a big annoyance/bug: every time you recreate a container (when you update it, for example), it gets a new MAC address, completely ignoring the static one passed through --mac-address. This is being tracked on GitHub, and hopefully at some point it will work as expected.
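For reference, switching to DHCP only changes the ipam section of the conflist; the macvlan part stays identical. A sketch of the relevant plugin entry (per the article above, this also requires the CNI dhcp daemon to be running and listening on its socket):

```json
{
   "type": "macvlan",
   "master": "enp3s0",
   "ipam": {
      "type": "dhcp"
   }
}
```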


If you want to allow the host to be able to connect to your containers, you'll have to use a different setup.

You'll need to set up a proper bridge. For this, you should read through the proper documentation for your distribution or the subsystem you're using for handling your networking.

In my case, I fully leverage systemd-networkd to handle all the networking.

# /etc/systemd/network/bridge.netdev 
[NetDev]
Name=br0
Kind=bridge

# /etc/systemd/network/bridge.network
[Match]
Name=br0

[Network]
DHCP=yes

# /etc/systemd/network/bind.network
[Match]
Name=enp3s0

[Network]
Bridge=br0

Take care that no other configuration files are managing the IP addresses of the interface you use to connect to your network.
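A quick way to check, assuming your config lives under /etc/systemd/network: list every file that mentions the physical interface, and make sure only the bind.network file above shows up.

```shell
# Show all networkd units referencing the physical NIC; anything besides
# the Bridge= stanza above is probably still assigning it an address.
grep -rl 'enp3s0' /etc/systemd/network/
```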

Restart the networking: systemctl restart systemd-networkd

If all went well, your system will have created a bridge called br0 and received a new IP address on it (if you rely on static DHCP leases, your system will likely have received a different address, because the bridge interface has a different MAC address).

[root@shuttle ~]# ip -br a
lo               UNKNOWN        127.0.0.1/8 ::1/128 
enp3s0           UP             
enp5s0           DOWN           
br0              UP             192.168.50.101/24 2a02:1812:1615:1cf0::295/128 fd13:8df3:eed::295/128 fd13:8df3:eed:0:4cff:c6ff:feda:8e40/64 2a02:1812:1615:1cf0:4cff:c6ff:feda:8e40/64 fe80::4cff:c6ff:feda:8e40/64 
wlp0s29u1u4      DOWN           
vethbb99ab30@if3 UP             fe80::7075:8fff:fe22:6f5b/64 
veth200d13e4@if3 UP             fe80::c030:5cff:feb6:c77e/64 

Now the only thing left is to create a CNI configuration file slightly different from the one we created before: the plugin type becomes "bridge", and instead of a "master" interface we point it at our bridge with the "bridge" option.

# /etc/cni/net.d/bridge_local.conflist
{
   "cniVersion": "0.4.0",
   "name": "bridge_local",
   "plugins": [
      {
         "type": "bridge",
         "bridge": "br0",
         "ipam": {
            "type": "host-local",
            "ranges": [
                [
                    {
                        "subnet": "192.168.50.0/24",
                        "rangeStart": "192.168.50.2",
                        "rangeEnd": "192.168.50.254",
                        "gateway": "192.168.50.1" 
                    }
                ]
            ],
            "routes": [
                {"dst": "0.0.0.0/0"}
            ]
         }
      },
      {
         "type": "tuning",
         "capabilities": {
            "mac": true
         }
      }
   ]
}

Don't forget to recreate your containers with --net bridge_local, otherwise you won't see the magic happen.

Another container example:

podman run \
  -d \
  --privileged \
  --name firefly \
  --dns 192.168.50.2 \
  --dns-search lan \
  --net bridge_local \
  --ip 192.168.50.13 \
  --mac-address 2A:7C:AA:ED:A2:B1 \
  -e APP_KEY="some random key, please don't use this" \
  -e DB_HOST=postgres \
  -e DB_PORT=5432 \
  -e DB_DATABASE=firefly \
  -e DB_USERNAME=firefly \
  -e DB_PASSWORD="wouldn't you like to know" \
  -v firefly_export:/var/www/firefly-iii/storage/export \
  -v firefly_upload:/var/www/firefly-iii/storage/upload \
  jc5x/firefly-iii:stable

You can now verify that it works by curling the container on a port you know should answer.

[root@shuttle ~]# podman ps | grep firefly
8c92b959aded  docker.io/jc5x/firefly-iii:stable                        18 minutes ago  Up 18 minutes ago                                                  firefly
[root@shuttle ~]# curl firefly.lan
<!DOCTYPE html>
<html>
    <head>
        <meta charset="UTF-8" />
        <meta http-equiv="refresh" content="0;url='http://firefly.lan/login'" />

        <title>Redirecting to http://firefly.lan/login</title>
    </head>
    <body>
        Redirecting to <a href="http://firefly.lan/login">http://firefly.lan/login</a>.
    </body>
</html>