Puppet module handling a fully automatic Proxmox installation on top of a fresh Debian install.
- Description
- Setup - The basics of getting started with Proxmox
- Usage - Configuration options and additional functionality
- Limitations - OS/provider compatibility, etc.
- Development - Guide for contributing to the module
The purpose of this module is to provision Proxmox servers at providers that don't offer it preinstalled. You just order a Debian server, run Puppet, and voilà! You have a Proxmox server.
We do not plan on adding features for anything that can be done via Proxmox's web interface or an existing puppet module, like example42/puppet-network.
The module installs and configures a default Proxmox server. That alone changes a lot of things on the machine (just look at how long the run takes). After a successful Puppet run, the result should no longer be considered a Debian server but a Proxmox server. The two have a lot in common, but when you have a specific problem or need, go to Proxmox's documentation first.
Warning: The module will reboot your server once the Puppet run is done. This is necessary to switch to the PVE kernel.
- A clean Debian install
- A correct hostname configuration:
The /etc/hosts file should at least contain the IPv4 configuration:
127.0.0.1 localhost.localdomain localhost
<public_server_ip> proxmox.domain.com proxmox
<puppetserver_ip> puppet
/etc/hostname should just contain the FQDN (proxmox.domain.com)
- Install puppet-agent (see the install sketch after this list)
- Run puppet agent -t (the server reboots a few seconds after the run finishes)
- After the server pings again, go to https://proxmox.domain.com:8006, ignore the "security" warning, and log in with your root password. Maybe start by generating a valid certificate with Let's Encrypt; it's included in Proxmox's settings ;-)
- Happy Proxmoxing!
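For the puppet-agent step, here's a minimal sketch for Debian 11 "Bullseye" with Puppet 7; adjust the release package to your Debian codename and Puppet series. The server name puppet comes from the /etc/hosts entry above (it's also the agent's default):

wget https://apt.puppet.com/puppet7-release-bullseye.deb
dpkg -i puppet7-release-bullseye.deb
apt-get update
apt-get install -y puppet-agent
# First run: fetches and applies the catalog from the server named "puppet"
/opt/puppetlabs/bin/puppet agent -t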
include proxmox
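If you classify nodes with a puppetserver, a minimal sketch in the environment's site.pp is enough (proxmox.domain.com is just this README's example FQDN):

# manifests/site.pp on the puppetserver
node 'proxmox.domain.com' {
  include proxmox
}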
By default, the module doesn't touch the network configuration, so you can set it up just the way you want. We recommend example42/puppet-network, for example:
# Public network bridge, ipv4
network::interface { 'vmbr0':
  family       => 'inet',
  ipaddress    => $::ipaddress,
  netmask      => $::netmask,
  gateway      => $facts['gatewayv4'],
  bridge_ports => [ $facts['netdev'] ],
  bridge_stp   => 'off',
  bridge_fd    => 0,
}

# Private network bridge, ipv4
network::interface { 'vmbr1':
  family       => 'inet',
  address      => '10.0.1.1/24',
  bridge_ports => ['none'],
  bridge_stp   => 'off',
  bridge_fd    => 0,
  post_up      => [
    'echo 1 > /proc/sys/net/ipv4/ip_forward',
    'iptables -t nat -A POSTROUTING -s \'10.0.1.0/24\' -o vmbr0 -j MASQUERADE',
  ],
  post_down    => [
    'iptables -t nat -D POSTROUTING -s \'10.0.1.0/24\' -o vmbr0 -j MASQUERADE',
  ],
}
This will create two bridges:
- vmbr0 is the public network, where you can use your additional/failover IPs for your load balancer, firewall, etc. Any VM that needs to be reachable from the internet needs an interface here.
- vmbr1 is the private network, for the application/database/backend VMs that don't need to be accessible directly from the internet. Connect an interface here and you get:
- a private IP (10.0.1.0/24) by DHCP,
- access to the internet through NAT,
- local DNS resolution so everyone can find their friends
If you want to use the private network, you need at least one VM with an interface on both bridges to act as a firewall, load balancer, VPN, SSH relay, whatever. Otherwise, you'll only be able to access VMs on the private network through a VNC console, or with SSH through the physical host acting as a jump host, which passes your key authentication through to the VM:
ssh -J my.physical.host root@<private_vm_ip>
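If you go through the jump host often, a small ~/.ssh/config sketch saves typing the -J flag every time (my.physical.host and the 10.0.1.* range are this README's examples):

# ~/.ssh/config
Host 10.0.1.*
    ProxyJump my.physical.host
    User root

Then a plain ssh 10.0.1.2 (for example) hops through the physical host automatically.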
- Debian 10 for Proxmox 6
- Debian 11 for Proxmox 7
- Upgrading from one version to another is not supported, only clean installs. If you want to help with that, pull requests are welcome ;-)
Start by submitting an issue that explains what you want to do. Branch if you are in the org, fork if you are not. Then open a pull request.
After the feature branches have been merged and you are happy with the state of master, it's time to release the module to the Forge. Remember, we use SemVer:
- Create a branch from master and update:
- CHANGELOG.md
- metadata.json (bump the version field)
- Create a Pull Request
- The person who merges the PR has a little CLI work to do:
git checkout master
git pull
git tag X.Y.Z
git push --tags
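To actually publish the tagged version on the Forge, one option (assuming a classic module layout; pdk build works too if you use the PDK) is to build the tarball locally and upload it through the Forge web interface:

puppet module build
# the tarball lands in pkg/<author>-proxmox-X.Y.Z.tar.gz; upload it on forge.puppet.com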