Introduction
This page describes the installation of OpenVZ on "Ubuntu Server" as a host. In the Hardy release of Ubuntu, the OpenVZ packages are in the "universe" component, which does not have guarantees of support. Note that KVM is the main virtualization technology supported in Ubuntu.
To follow the practical steps in this guide, you should be an Ubuntu user who is comfortable with command-line applications, the Bourne Again SHell (bash) environment, and editing system configuration files with your preferred text editor.
About OpenVZ
OpenVZ is a server virtualization solution for Linux. It enables you to create multiple virtual Linux servers which are isolated from the host and from each other, based on a technique called "Operating System Virtualization". Similar techniques are used in Solaris Zones, Linux-VServer and FreeBSD jails. Unlike KVM, Xen or VMware, this technique does not use hardware virtualization. The so-called "Virtual Servers" or VPSs behave like stand-alone servers. They consume fewer resources than their hardware-virtualized counterparts, but must use the same kernel as the host. Therefore you can only have Linux VPSs on a Linux host.
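Because all containers share the host's kernel, a quick check once OpenVZ is installed and a container is running (the container ID 777 is just an example) is to compare kernel versions; both commands report the same version:
$ uname -r
$ sudo vzctl exec 777 uname -r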
The original documentation can be found here: http://openvz.org/
Alternatives to OpenVZ
LXC and Xen are alternatives to OpenVZ.
Installing OpenVZ
* OpenVZ is supported on Ubuntu only in the 8.04 release.
If you are looking for a host node more recent than Ubuntu 8.04, try Proxmox (which is Debian-based), Debian, or CentOS. In general, OpenVZ support is best on .rpm-based systems, with Debian second.
If you are interested in seeing OpenVZ on .deb systems, please consider working with the OpenVZ project as the OpenVZ kernel patch is not maintained by the Ubuntu developers.
8.04 Hardy
- Install the kernel and tools
$ sudo apt-get install linux-openvz vzctl
Important! Make sure that you are using at least the linux-image-2.6.24-19-openvz kernel, which is the first really stable kernel without basic usability issues.
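To check which OpenVZ kernel packages are installed, and which kernel is currently running, you can use for example:
$ dpkg -l | grep openvz
$ uname -r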
- Reboot into the openvz kernel
Remove the -server kernel, or the -generic one if you are on a desktop machine
$ sudo apt-get remove --purge --auto-remove linux-image-.*server
Change the sysctl variables in /etc/sysctl.conf
# On Hardware Node we generally need
# packet forwarding enabled and proxy arp disabled
net.ipv4.conf.default.forwarding=1
net.ipv4.conf.default.proxy_arp = 0
net.ipv4.ip_forward=1

# Enables source route verification
net.ipv4.conf.all.rp_filter = 1

# Enables the magic-sysrq key
kernel.sysrq = 1

# TCP Explicit Congestion Notification
#net.ipv4.tcp_ecn = 0

# We do not want all our interfaces to send redirects
net.ipv4.conf.default.send_redirects = 1
net.ipv4.conf.all.send_redirects = 0
- Apply the sysctl changes
$ sudo sysctl -p
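To verify that the new settings are active, query one of them; it should report the value set above:
$ sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1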
- Create a symlink to /vz, because most of the vz tools expect the OpenVZ folders to reside there. This step is not strictly necessary, but it can prevent problems later when other vz-related components are installed.
$ sudo ln -s /var/lib/vz /vz
10.04 LTS (Lucid)
The information below is old. Follow Install kernel from RPM on Ubuntu 10.04.
Before you begin, please remember that:
- OpenVZ was only supported in Ubuntu 8.04 LTS (Hardy Heron), and not in Ubuntu 10.04 (Lucid Lynx)
- At the time of writing (November 2010), the OpenVZ patch for the Linux 2.6.32 kernel works, but has not yet been released as stable, and for this reason it is not recommended for production environments.
- The Ext4 filesystem is not yet supported by vzquota, so if you want host kernel stability, better use Ext3 for the containers space in /vz or /var/lib/vz (see the sketch after this list)
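As a sketch, a dedicated Ext3 partition for the containers area could be mounted via /etc/fstab like this (the device name /dev/sdb1 is only a placeholder for your own partition):
/dev/sdb1  /var/lib/vz  ext3  defaults  0  2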
Be sure that the system is up-to-date (also kernel)
sudo apt-get update
sudo apt-get upgrade
# Reboot if the kernel was updated
We now need Bash as the default shell. In the next screen, select No so that dash is not used as /bin/sh.
sudo dpkg-reconfigure dash
Install Required Packages For Kernel Compilation
sudo apt-get install kernel-package libncurses5-dev fakeroot wget bzip2 module-assistant debhelper build-essential
Select the OpenVZ compile configuration (choose only one of them):
# For 32bit platform, limited to 3GiB of RAM
Variant="linux-image-generic"
VersionAppendix="-openvz"
MyConfigFile="kernel-2.6.32-i686.config.ovz"

# For 32bit platform, PAE for big memory
Variant="linux-image-generic-pae"
VersionAppendix="-openvz-pae"
MyConfigFile="kernel-2.6.32-i686-PAE.config.ovz"

# For 64bit platform
Variant="linux-image-server"
VersionAppendix="-openvz"
MyConfigFile="kernel-2.6.32-x86_64.config.ovz"
Satisfy the build dependencies for the source package
# Find the matching 2.6.32 linux-image package for the chosen variant...
Package="$(apt-cache showpkg $Variant | grep "^2\.6\.32" | grep "linux-image")"
# ...and keep only the third word (the package name)
Package=$(ReturnWord () { echo $3; }; ReturnWord $Package)
sudo apt-get build-dep --no-install-recommends $Package
Prepare linux headers
sudo m-a prepare
Create configuration for kernel compiler
sudo kernel-packageconfig
Optimize compiler multi-core usage (do this only once)
# Count the CPU cores; cores + 1 is a good number of parallel make jobs
Cores=$(Nr () { echo $#; }; Nr $(grep "processor" /proc/cpuinfo | cut -f2 -d":"))
echo "CONCURRENCY_LEVEL := $(($Cores + 1))" | sudo tee -a /etc/kernel-pkg.conf
Get Ubuntu Linux kernel source code for 2.6.32
cd /usr/src
sudo wget http://archive.ubuntu.com/ubuntu/pool/main/l/linux/linux_2.6.32.orig.tar.gz
Get OpenVZ patch for kernel (you can see here if the file has changed)
cd /usr/src
sudo wget http://download.openvz.org/kernel/branches/2.6.32/current/patches/patch-feoktistov.1-combined.gz
Now download the matching kernel configuration file:
cd /usr/src
sudo wget http://download.openvz.org/kernel/branches/2.6.32/current/configs/$MyConfigFile
Unpack the kernel source
cd /usr/src
sudo rm -fR linux-2.6.32
sudo tar -xpf linux_2.6.32.orig.tar.gz
sudo rm -fR "linux-2.6.32$VersionAppendix"
sudo mv linux-2.6.32 "linux-2.6.32$VersionAppendix"
sudo rm linux
sudo ln -s "linux-2.6.32$VersionAppendix" linux
Apply OpenVZ patch and configuration
cd /usr/src/linux
sudo gunzip -dc /usr/src/patch-feoktistov.1-combined.gz | sudo patch -p1 --batch
sudo cp -f "/usr/src/$MyConfigFile" .config
sudo make oldconfig
Fix some bugs:
Edit the file Documentation/lguest/Makefile and change
all: lguest
clean:
to
all:
clean:
(otherwise you will get a compilation error involving eventfd.h and zlib.h)
Now the long kernel compilation and packaging:
cd /usr/src/linux
sudo make-kpkg --initrd --append-to-version=$VersionAppendix --revision=1 kernel_image kernel_headers
Install the Kernel
cd /usr/src
ls -l *.deb
sudo dpkg -i linux-image-2.6.32.28-openvz_1_amd64.deb
sudo dpkg -i linux-headers-2.6.32.28-openvz_1_amd64.deb
Create an initramfs and update GRUB's menu.lst or grub.cfg (adjust the "2.6.32.28-openvz" string according to the generated packages)
sudo mkinitramfs -k 2.6.32.28-openvz -o /boot/initrd.img-2.6.32.28-openvz
sudo update-grub
Create a new file, /etc/sysctl.d/10-openvz.conf, with the following sysctl variables. (This step may become unnecessary once the vzctl package is updated.)
### Optimized for Ubuntu 10.04
# vim:ft=sysctl
# sysctl config for OpenVZ
#net.ipv4.ip_forward=1
net.ipv4.conf.default.forwarding = 1
net.ipv4.conf.default.proxy_arp = 1
net.ipv4.ip_forward = 1

# Enables source route verification
net.ipv4.conf.all.rp_filter = 1

# Enables the magic-sysrq key
kernel.sysrq = 1

# TCP Explicit Congestion Notification
#net.ipv4.tcp_ecn = 0

# We do not want all our interfaces to send redirects
net.ipv4.conf.default.send_redirects = 0
- Apply the sysctl changes
$ sudo sysctl -p /etc/sysctl.d/10-openvz.conf
Install OpenVZ management tools
sudo apt-get install --no-install-recommends vzctl vzquota vzdump
(Compile these tools from source only if you know that their Ext4 support is complete and stable.)
Create a Symlink to be FHS-compliant
sudo ln -s /var/lib/vz /vz
If you are using Ext4, you will almost certainly encounter a kernel panic when starting a container. Some people mount the filesystem with the 'nodelalloc' option in /etc/fstab, but then instead of a kernel panic the system can freeze or collapse (see http://bugzilla.openvz.org/show_bug.cgi?id=1509)
For the moment, the only way to use Ext4 is to set DISK_QUOTA=no in /etc/vz/vz.conf (disk space quotas will then have no effect on containers)
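The relevant line in /etc/vz/vz.conf would then look like this:
# Disk quota parameters (disabled here to allow Ext4)
DISK_QUOTA=no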
Reboot into your new OpenVZ kernel
sudo reboot
Check your running kernel
sudo uname -rvo
This command should output something like:
2.6.32.28-openvz #1 SMP Tue Sep 24 13:07:07 CEST 2010 GNU/Linux
Ensure that all is fine now.
sudo ps ax | grep -v "grep" | grep "vzmond"
This should give something like:
3890 ? S 0:00 [vzmond]
Congratulations! You are now running OpenVZ on Ubuntu 10.04 LTS.
- If you need FUSE mounts (such as SSHFS) and don't see the fuse module loaded (lsmod | grep fuse), you can enable it with:
sudo modprobe --first-time fuse
echo "fuse" | sudo tee -a /etc/modules
To allow a container to mount FUSE devices, you need to give it permissions (the container may need to be restarted):
sudo vzctl set 777 --devnodes fuse:rw --save
OpenVZ Guests
Template(s)
Before we can create a new Virtual Private Server, we first have to either download or create a template of the distro we want to use. OpenVZ uses "templates" and "cached templates". The difference is that "templates" are a sort of cookbook for "cached templates": a package manager is used to download and create the cached template of the chosen distribution. Because cached versions of most popular distros are already created and are not that big, it is easiest to download the cached version and place it in the "/var/lib/vz/template/cache" directory (or the path you have chosen in the "/etc/vz/vz.conf" file).
"Official" cached templates can be found here http://openvz.org/download/template/cache/
"Community" or "contrib" templates can be found here http://ftp.openvz.org/template/precreated/contrib/
BodhiZazen's OpenVZ templates - These templates were submitted to the OpenVZ "contrib" set of templates.
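For example, to download a pre-created cached template into the cache directory (the file name and URL below are illustrative; check the pages above for the currently available templates):
cd /var/lib/vz/template/cache
sudo wget http://download.openvz.org/template/precreated/ubuntu-8.04-i386-minimal.tar.gz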
Once you have downloaded a template (for example ubuntu-8.04-i386-minimal.tar.gz) and placed it in "/var/lib/vz/template/cache" you can install it using the following command:
sudo vzctl create 777 --ostemplate ubuntu-8.04-i386-minimal
In the example above a CT ID of 777 is used; of course any other non-allocated ID could be used.
The section below explains how to create your own cached template. If you installed a default one as explained above, continue to #Administration to learn how to start and enter your new node.
Create Template
For more updated instructions on Ubuntu OpenVZ template creation see: bodhi.zazen's blog, Ubuntu 10.04 OpenVZ Template Creation
- Previous blog entries cover Ubuntu 9.10 -
This section describes how to create an Ubuntu 8.04 Hardy minimal template. This information is somewhat dated and is based on the OpenVZ wiki's Debian template creation guide.
Documentation format:
- Run the command on the OpenVZ host system
[HW] $ command
- Run the command on the OpenVZ container
[VPS] $ command
Prerequisites
- debootstrap
[HW] $ sudo apt-get install debootstrap
Creating template
Running debootstrap
- Create a working directory:
[HW] $ mkdir hardy-chroot
- Run debootstrap to install a minimal Hardy Heron system into that directory:
[HW] $ sudo debootstrap [--arch ARCH] hardy hardy-chroot
If the ARCH of the host machine matches that of the container, you can skip the --arch option; but if you need to build an OS template for another ARCH, specify it explicitly:
for AMD64/x86_64, use amd64
for i386, use i386
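For example, to build an amd64 template you would run:
[HW] $ sudo debootstrap --arch amd64 hardy hardy-chroot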
Preparing/starting a container
Now that you have an installation created by debootstrap, you can run it as a container. In the example below a CT ID of 777 is used; of course any other non-allocated ID could be used.
- Moving installation to container private area
[HW] $ sudo mv hardy-chroot /vz/private/777
- All files need to be owned by root
[HW] $ sudo chown -R root /vz/private/777
- Setting initial container configuration
[HW] $ sudo vzctl set 777 --applyconfig vps.basic --save
- Setting the container's OSTEMPLATE
[HW] $ echo "OSTEMPLATE=ubuntu-8.04" | sudo tee -a /etc/vz/conf/777.conf >/dev/null
- Setting container's IP address. (This is just a temporary setting for the update process to work)
[HW] $ sudo vzctl set 777 --ipadd x.x.x.x --save
- Setting DNS server for the container (This is just a temporary setting for the update process to work)
[HW] $ sudo vzctl set 777 --nameserver x.x.x.x --save
- Removing udev from the /etc/rcS.d and klogd from the /etc/rc2.d folders
If udev is in place the container might not start; it can get stuck, and even vzctl enter would not be able to access the container's command line. If klogd is in place it might not let the runlevel 2 change finish.
[HW] $ sudo rm /vz/private/777/etc/rcS.d/S10udev /vz/private/777/etc/rc2.d/S11klogd
- Starting the container
[HW] $ sudo vzctl start 777
Modify the installation
- Enter a container:
[HW] $ sudo vzctl enter 777
Warning!!! Do not run the commands below on the hardware node, they are only to be run within the container!
Note: You will not need to use sudo within the container, you enter as root when you use vzctl enter.
- Remove unnecessary packages:
[VPS] $ apt-get remove --purge busybox-initramfs console-setup dmidecode eject \
  ethtool initramfs-tools klibc-utils laptop-detect libiw29 libklibc \
  libvolume-id0 mii-diag module-init-tools ntpdate pciutils pcmciautils ubuntu-minimal \
  udev usbutils wireless-tools wpasupplicant xkb-data tasksel tasksel-data
Note: If you want to use the tasksel tool, do not remove it — but then you have to let laptop-detect stay.
Note: if you remove the module-init-tools package, a fake modprobe is needed for IPv6 addresses; see below!
- The DHCP client can be also removed if you know that you will not need it.
[VPS] $ apt-get remove --purge --auto-remove dhcp3-client dhcp3-common
- Clean up after udev
[VPS] $ rm -fr /lib/udev
- Disable getty. On a usual Linux system, getty runs on virtual terminals, which a container does not have. So having getty running makes no sense; worse, it complains that it cannot open a terminal device, and this clutters the logs.
[VPS] $ initctl stop tty1
[VPS] $ initctl stop tty2
[VPS] $ initctl stop tty3
[VPS] $ initctl stop tty4
[VPS] $ initctl stop tty5
[VPS] $ initctl stop tty6
[VPS] $ rm /etc/event.d/tty*
[VPS] $ rm /etc/init/tty*
- Set sane permissions for /root directory
[VPS] $ chmod 700 /root
- Disable root login
[VPS] $ usermod -p '!' root
- "fake-modprobe" needed for IPv6 addresses
[VPS] $ ln -s /bin/true /sbin/modprobe
- On IPv6 setup, the command "modprobe -Q IPv6" is called, which fails without the fake modprobe
- Set the default repositories for Hardy
Make sure that you replace <YOURCOUNTRY> with your country code
[VPS] $ COUNTRY=<YOURCOUNTRY>.
[VPS] $ cat >/etc/apt/sources.list <<EOF
# Binary
deb http://${COUNTRY}archive.ubuntu.com/ubuntu/ hardy main restricted universe multiverse
deb http://${COUNTRY}archive.ubuntu.com/ubuntu/ hardy-updates main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu hardy-security main restricted universe multiverse
# Binary Canonical
# deb http://archive.canonical.com/ubuntu hardy partner
# Binary backport
# deb http://${COUNTRY}archive.ubuntu.com/ubuntu/ hardy-backports main restricted universe multiverse
# Source
# deb-src http://${COUNTRY}archive.ubuntu.com/ubuntu/ hardy main restricted universe multiverse
# deb-src http://${COUNTRY}archive.ubuntu.com/ubuntu/ hardy-updates main restricted universe multiverse
# deb-src http://security.ubuntu.com/ubuntu hardy-security main restricted universe multiverse
# Source backport
# deb-src http://${COUNTRY}archive.ubuntu.com/ubuntu/ hardy-backports main restricted universe multiverse
# Source Canonical
# deb-src http://archive.canonical.com/ubuntu hardy partner
EOF
- Note: Only the "main restricted universe multiverse" binary repositories are enabled. Change it if you need more.
- Apply new security updates
[VPS] $ apt-get update && apt-get upgrade
- Install some more packages
[VPS] $ apt-get install ssh quota
- Fix SSH host keys. This is only useful if you installed SSH above. Each individual container should have its own pair of SSH host keys. The commands below wipe out the existing SSH keys and instruct the newly-created container to create new SSH keys on first boot.
[VPS] $ rm -f /etc/ssh/ssh_host_*
[VPS] $ cat << EOF > /etc/rc2.d/S15ssh_gen_host_keys
#!/bin/sh
ssh-keygen -f /etc/ssh/ssh_host_rsa_key -t rsa -N ''
ssh-keygen -f /etc/ssh/ssh_host_dsa_key -t dsa -N ''
rm -f \$0
EOF
[VPS] $ chmod a+x /etc/rc2.d/S15ssh_gen_host_keys
- Link /etc/mtab to /proc/mounts, so df and friends will work:
[VPS] $ rm -f /etc/mtab
[VPS] $ ln -s /proc/mounts /etc/mtab
After that, it makes sense to disable the mtab.sh init script, which messes with /etc/mtab:
[VPS] $ update-rc.d -f mtab.sh remove
- Disable some services. In most cases you don't want klogd to run (the only exception is if you configure iptables to log some events), so you can disable it.
[VPS] $ update-rc.d -f klogd remove
- Set default hostname
[VPS] $ echo "localhost" > /etc/hostname
- Set /etc/hosts
[VPS] $ echo "127.0.0.1 localhost.localdomain localhost" > /etc/hosts
- Add ptys to /dev
This is needed in case /dev/pts is not mounted after the container starts. If the /dev/ttyp* and /dev/ptyp* files are present, and LEGACY_PTYS support is enabled in the kernel, vzctl will still be able to enter the container.
[VPS] $ cd /dev && /sbin/MAKEDEV ptyp
- Remove nameserver(s)
[VPS] $ > /etc/resolv.conf
- Clean the apt cache
[VPS] $ apt-get clean
- Cleaning up log files
[VPS] $ > /var/log/messages; > /var/log/auth.log; > /var/log/kern.log; > /var/log/bootstrap.log
[VPS] $ > /var/log/dpkg.log; > /var/log/syslog; > /var/log/daemon.log; > /var/log/apt/term.log
[VPS] $ rm -f /var/log/*.0 /var/log/*.1
- Exit the container
[VPS] $ exit
Preparing for and packing template cache
The following commands should be run on the host system (i.e. not inside a container).
- We don't need an IP for the container anymore, and we definitely do not need it in the template cache, so remove it
[HW] $ sudo vzctl set 777 --ipdel all --save
- Stop the container
[HW] $ sudo vzctl stop 777
- Change dir to the container private
[HW] $ cd /vz/private/777
Now create a cached OS tarball. In the command below, you'll want to replace <arch> with your architecture (i386, amd64).
Note the space and the dot at the end of the command.
[HW] $ sudo tar -czf /vz/template/cache/ubuntu-8.04-<arch>-minimal.tar.gz .
- Cleanup
[HW] $ sudo vzctl destroy 777
[HW] $ sudo rm -f /etc/vz/conf/777.conf.destroyed
Testing template cache
We can now create a container based on the just-created template cache. Be sure to change <arch> to your architecture, just as you did when you named the tarball above.
[HW] $ sudo vzctl create 123456 --ostemplate ubuntu-8.04-<arch>-minimal
- Now make sure that your new container works
[HW] $ sudo vzctl start 123456
[HW] $ sudo vzctl exec 123456 ps axf
- You should see that a few processes are running.
- Cleanup
[HW] $ sudo vzctl stop 123456
[HW] $ sudo vzctl destroy 123456
[HW] $ sudo rm -f /etc/vz/conf/123456.conf.destroyed
9.10 (Karmic) VPS
Create openvz.conf in /etc/init and fix the init sequence to get OpenVZ working with upstart. Original reference.
[VPS] # cat << 'EOF' > /etc/init/openvz.conf
description "Fix OpenVZ"
start on startup
task
pre-start script
    mount -t proc proc /proc
    mount -t devpts devpts /dev/pts
    mount -t sysfs sys /sys
    mount -t tmpfs varrun /var/run
    mount -t tmpfs varlock /var/lock
    mkdir -p /var/run/network
    touch /var/run/utmp
    chmod 664 /var/run/utmp
    chown root.utmp /var/run/utmp
    if [ "$(find /etc/network/ -name upstart -type f)" ]; then
        chmod -x /etc/network/*/upstart || true
    fi
end script
script
    start networking
    initctl emit filesystem --no-wait
    initctl emit local-filesystems --no-wait
    initctl emit virtual-filesystems --no-wait
    init 2
end script
EOF
Check that /bin/sh is symlinked to bash:
# file /bin/sh
/bin/sh: symbolic link to `bash'
Fix the "init: tty1 main process ended, respawning" syslog message
[VPS] # find /etc/init/ -maxdepth 1 -type f -name tty\* -print0 | /usr/bin/xargs -r0 -i -t sed -i 's/respawn/#respawn/g' {}
10.04 LTS (Lucid) VPS
To run a 10.04 VPS (VE in OpenVZ speak) you need to make several adjustments inside the VPS to make it boot. The steps are outlined at http://blog.bodhizazen.net/linux/ubuntu-10-04-openvz-templates/ .
Administration
When we create a VPS, we must give it a number. This number must be unique and is used to control the VPS during its existence. A good guideline is to use the last three digits of the IP address you are going to use for this VPS, e.g. 10.0.0.101 would be VPS 101.
Creating a container from OS template
- Create a container
[HW] $ sudo vzctl create <VEID> --ostemplate <the name of your template>
- Set the IP, nameserver, hostname and start the container as described below
- Enter the container (equivalent to chroot)
[HW] $ sudo vzctl enter [VEID]
Install language support: language-pack-[LANGUAGE]-base; for English use language-pack-en-base
[VPS] $ apt-get install language-pack-en-base
- You might need to run apt-get update first
- Set timezone
[VPS] $ dpkg-reconfigure tzdata
- Exit the container
[VPS] $ exit
Configuring a container
- Adding IP address
[HW] $ sudo vzctl set [VEID|VENAME] --ipadd [IP_ADDRESS] --save
- Deleting IP address
[HW] $ sudo vzctl set [VEID|VENAME] --ipdel [IP_ADDRESS] --save
- Setting hostname
[HW] $ sudo vzctl set [VEID|VENAME] --hostname [HOSTNAME] --save
- Setting nameserver
[HW] $ sudo vzctl set [VEID|VENAME] --nameserver [NAMESERVER_IP] --save
- Setting virtual name
[HW] $ sudo vzctl set [VEID] --name [VENAME] --save
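Putting these commands together, a typical first-time setup of a container might look like this (the ID, name, addresses and template name are only examples):
[HW] $ sudo vzctl create 101 --ostemplate ubuntu-8.04-i386-minimal
[HW] $ sudo vzctl set 101 --name web01 --save
[HW] $ sudo vzctl set 101 --ipadd 10.0.0.101 --save
[HW] $ sudo vzctl set 101 --nameserver 10.0.0.1 --save
[HW] $ sudo vzctl set 101 --hostname web01.example.com --save
[HW] $ sudo vzctl start 101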
Start, stop, take snapshot or revert to snapshot
- Start
[HW] $ sudo vzctl start [VEID|VENAME]
Important! As of 2008-06-04 there is a bug in the linux-image-2.6.24-18-openvz and earlier kernels. It prevents the network settings from being copied into the VE, and you cannot use the cp and mv commands inside your VE.
- Stop
[HW] $ sudo vzctl stop [VEID|VENAME]
- Take snapshot
[HW] $ sudo vzctl chkpnt [VEID|VENAME] [--dumpfile <name>]
- Revert to snapshot
[HW] $ sudo vzctl restore [VEID|VENAME] [--dumpfile <name>]
Destroying a container
[HW] $ sudo vzctl destroy [VEID|VENAME]
Monitoring
- List running VPS
[HW] $ sudo vzlist
- List all VPS
[HW] $ sudo vzlist -a
Networking
- Networking nodes using NAT or Bridge
Networking, IPv6 with venet0 device
- Set the NET_ADMIN:on capability.
[HW] $ sudo vzctl set VPSID --capability net_admin:on
Edit the /etc/vz/dists/scripts/debian-add_ip.sh script and add the "up route --inet6 add ::/0 venet0" line under the venet0 IPv6 configuration:
iface venet0 inet6 static
    address ::1
    netmask 128
    up route --inet6 add ::/0 venet0
Change the proxy_ndp and IPv6 forwarding state to 1 (note that the redirection must run as root, hence tee):
[HW] $ echo "1" | sudo tee /proc/sys/net/ipv6/conf/eth0/proxy_ndp
[HW] $ echo "1" | sudo tee /proc/sys/net/ipv6/conf/eth0/forwarding
[HW] $ echo "1" | sudo tee /proc/sys/net/ipv6/conf/venet0/forwarding
Change the sysctl variables in /etc/sysctl.conf
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.eth0.proxy_ndp = 1
- Add the IPv6 address for the virtual node.
[HW] $ sudo vzctl set VEID --ipadd fc00::01 --save
- Restart the virtual node.
[HW] $ sudo vzctl restart VEID
- Test your configuration.
[HW] $ sudo vzctl enter VEID
[VPS] $ ip addr
3: venet0: <BROADCAST,POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/void
    inet 127.0.0.1/32 scope host
    inet 123.45.67.89/32 scope global
    inet6 ::1/128 scope host
    inet6 fc00::1/128 scope global
[VPS] $ ping6 -n www.6bone.net
PING www.6bone.net(2001:5c0:0:2::24) 56 data bytes
64 bytes from 2001:5c0:0:2::24: icmp_seq=1 ttl=52 time=203 ms
See also