A Docker host benefits from regular cleanup, daily or at least weekly, using the following command:
docker system prune -f
You should probably put that on a systemd timer (or a cron job) so it runs automatically.
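A minimal sketch of that timer, assuming systemd. The unit names docker-prune.service and docker-prune.timer are my own choice, and the docker binary path may differ on your system:

```ini
# /etc/systemd/system/docker-prune.service  (unit name is illustrative)
[Unit]
Description=Prune unused Docker data

[Service]
Type=oneshot
ExecStart=/usr/bin/docker system prune -f

# /etc/systemd/system/docker-prune.timer  (unit name is illustrative)
[Unit]
Description=Run docker system prune daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Install both files, then run systemctl daemon-reload followed by systemctl enable --now docker-prune.timer.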
docker image prune -a   # removes all unused images, not just dangling ones
docker system df
docker ps -a
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Networks}}\t{{.Ports}}\t{{.Size}}"
docker logs -f container_name
docker container inspect container_name
docker network ls
docker network prune   # removes all networks not used by at least one container
docker network inspect network_name
docker attach mybb_mybb_1
docker history mybb/mybb:latest
docker exec -it mybb_mybb_1 /bin/sh
docker container stop $(docker container ls -q --filter "name=myapp")   # stops every running container whose name matches "myapp"
dtags () {
  local image="${1}"
  wget -q https://registry.hub.docker.com/v1/repositories/"${image}"/tags -O - \
    | tr -d '[]" ' | tr '}' '\n' | awk -F: '{print $3}'
}
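To see what that pipeline is doing, here it is run against a canned sample of the v1 tags response (the JSON below is made up for illustration, not real registry output):

```shell
# Canned sample of a v1 /tags response (made-up data, for illustration).
sample='[{"layer": "", "name": "latest"}, {"layer": "", "name": "2.6"}]'

# Same pipeline as dtags: strip brackets, quotes, and spaces; split the
# objects onto lines at '}'; print the third ':'-separated field (the tag).
printf '%s' "$sample" | tr -d '[]" ' | tr '}' '\n' | awk -F: '{print $3}'
# prints:
# latest
# 2.6
```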
docker search ruby --filter="is-official=true"
docker image inspect f4f4
You only need enough leading characters of the ID to identify it unambiguously; the first four are usually plenty.
By default, Docker captures the standard output (and standard error) of all your containers and writes it to files in JSON format.
So right away we know that Docker is managing its own log files for containers.
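On Linux, the json-file driver writes one JSON object per log line under /var/lib/docker/containers/<container-id>/<container-id>-json.log, with log, stream, and time keys. A quick sketch of that format, using a canned sample line rather than real container output:

```shell
# A canned sample of one json-file log line (made up for illustration).
line='{"log":"hello world\n","stream":"stdout","time":"2024-01-01T00:00:00.000000000Z"}'

# Pull out the stream field with sed, no jq required:
printf '%s\n' "$line" | sed -n 's/.*"stream":"\([^"]*\)".*/\1/p'
# prints:
# stdout
```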
If we look further down in the docs we can see that the max-size option defaults to -1, which means a container's log file can grow without limit. There's also the max-file option, which defaults to keeping 1 log file.
We can change this by editing the daemon.json file, which is located in /etc/docker on Linux. On Docker for Windows / Mac, open your settings, go to the "Daemon" tab, and flip on the "Advanced" settings.
Add this inside the top-level object of your daemon.json to cap each container's logs at 10 GB (1,000 rotated files of 10 MB each):
"log-driver": "json-file", "log-opts": { "max-size": "10m", "max-file": "1000" }
This allows up to 10 GB of log files per container, so adjust accordingly.
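Before restarting, it's worth validating the file, since a malformed daemon.json will keep the Docker daemon from starting. A quick check using Python's built-in json.tool, run here against a throwaway copy in /tmp for illustration:

```shell
# Write a sample daemon.json to a throwaway path (illustrative only;
# the real file lives at /etc/docker/daemon.json on Linux).
cat > /tmp/daemon.json <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "1000"
  }
}
EOF

# Exit status is non-zero if the JSON is malformed.
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "valid JSON"
# prints:
# valid JSON
```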
Then restart Docker with sudo systemctl restart docker.service and you'll be good to go. Note that the new log options only apply to containers created after the restart; existing containers keep the log configuration they were created with.