In this post I will share a piece of Python code that solves a common task: copying any local file into a running container.
The Docker SDK's put_archive function handles this, but it has a particularity: it only accepts the payload, whether files or folders, in tar format, and it automatically extracts the archive in the destination folder. The tar itself never lands in the destination folder; it has to be assembled locally first.
In the usual procedure, you load such a tar file (with the open method) from somewhere into a variable and upload that variable's contents: https://docker-py.readthedocs.io/en/stable/containers.html#docker.models.containers.Container.put_archive
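As a minimal sketch of that classic approach, the snippet below builds a tar on disk first and then reads its raw bytes back, exactly what put_archive expects. The file and paths here are stand-ins created just for illustration; the final put_archive call is left commented out since it needs a reachable Docker daemon.

```python
import os
import tarfile
import tempfile

# Hypothetical local file to ship (stands in for any readable file)
src = tempfile.NamedTemporaryFile(delete=False, suffix='.info')
src.write(b'example contents')
src.close()

# Classic approach: build the tar on disk first...
tar_path = src.name + '.tar'
with tarfile.open(tar_path, mode='w') as tar:
    tar.add(src.name, arcname=os.path.basename(src.name))

# ...then load the whole archive into a variable and hand the bytes over
with open(tar_path, 'rb') as f:
    data = f.read()

# obj_Container.put_archive('/tmp/', data)  # requires a reachable Docker daemon
```

The drawback is the temporary tar file left on disk, which motivates the in-memory variant below.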
Another option is to build the tar in memory, using the io module, and upload the bytes returned by that stream.
In the example below, I upload the local file "devops-db.info" to the folder "/tmp/" in the container "srv-jenkins-01", which runs on the Docker host 172.21.5.70.
import docker
import tarfile
import io
import os
str_Docker_URI = 'tcp://172.21.5.70:2375'
str_Container_Name = 'srv-jenkins-01'
str_Source_File = '/Users/fausto.branco/OneDrive/Work/Python/Bind9/devops-db.info'
str_Destination_Path = '/tmp/'
# Connect to the remote daemon and resolve the target container
obj_DockerClient = docker.DockerClient(base_url=str_Docker_URI)
obj_Container = obj_DockerClient.containers.get(str_Container_Name)

# Build the tar entirely in memory: gettarinfo() derives size and permissions
# from the open file, and addfile() writes the entry into the stream
obj_stream = io.BytesIO()
with tarfile.open(fileobj=obj_stream, mode='w|') as tmp_tar, open(str_Source_File, 'rb') as tmp_file:
    obj_info = tmp_tar.gettarinfo(fileobj=tmp_file)
    obj_info.name = os.path.basename(str_Source_File)
    tmp_tar.addfile(obj_info, tmp_file)

# Upload the raw tar bytes; Docker extracts them in the destination path
obj_Container.put_archive(str_Destination_Path, obj_stream.getvalue())
Right after execution, I validate inside the container:
root@b31bd131e02f:/tmp# ls -lah
total 30M
drwxrwxrwt 1 root root 4.0K May 7 12:31 .
drwxr-xr-x 1 root root 4.0K May 7 12:31 ..
-rw-r--r-- 1 502 dialout 517 Apr 29 15:43 devops-db.info