Docker

 
 

I want to be sure the torrent traffic of my Transmission Docker instance goes through my VPN.

I have several interfaces with different VLANs on the host, and I want to be sure the container created with docker compose uses only a specific interface. The interface on the correct VLAN has IP 192.168.90.92.

I have tested host connectivity with curl --interface ethX https://api.ipify.org/ and it works fine: each interface reports a different public IP.

I have tried the following in the docker compose file:

ports:
  - 9091:9091                        # Web UI port
  - 192.168.90.92:51413:51413        # Torrent port (TCP)
  - 192.168.90.92:51413:51413/udp    # Torrent port (UDP)

However, outbound traffic still leaves via the default gateway.
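From what I can tell, the ports: entries only bind the published (inbound) ports to that address; they don't decide which interface outbound traffic uses. One option I'm looking at is a macvlan network attached to the VLAN interface, roughly like this (only a sketch: the parent interface name, subnet, gateway, image and container address are assumptions about my setup):

networks:
  vlan90:
    driver: macvlan
    driver_opts:
      parent: eth0.90              # assumption: host interface carrying the 192.168.90.0/24 VLAN
    ipam:
      config:
        - subnet: 192.168.90.0/24
          gateway: 192.168.90.1    # assumption: the VLAN's gateway

services:
  transmission:
    image: lscr.io/linuxserver/transmission    # assumption: the Transmission image in use
    networks:
      vlan90:
        ipv4_address: 192.168.90.93            # a free address on the VLAN (not the host's .92)

With macvlan the container gets its own address on the VLAN, so the web UI would be reached directly at 192.168.90.93:9091 instead of through a published port, and the host itself can't talk to the container without an extra macvlan shim interface. Not sure it's the cleanest way, though.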

Any idea?

Thanks!

 
 

Over the past week I've been dealing with the Kinsing malware, which got onto my VPS through Docker. I've been reading up on it, and I've come to realise I'd been thinking about Docker all wrong in the way I was using it.

I enjoy using Portainer, so that's a must for me. I know Docker lets you reach a remote daemon securely over SSH via a context: docker context create vps --docker "host=ssh://user@vps".

I would like to use this method, with Portainer running locally, to connect to Docker on the remote machine over SSH. Does anyone know of a way to do this? I've been looking around and haven't found much.
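One workaround I'm considering, in case it helps: as far as I can tell Portainer can't use an SSH context directly, but the remote socket can be forwarded over SSH and then added as a plain Docker API environment. A sketch (assuming the remote user can read /var/run/docker.sock and local port 2375 is free):

# forward the remote Docker socket to a local TCP port (OpenSSH 6.7+ can forward unix sockets)
ssh -N -L 127.0.0.1:2375:/var/run/docker.sock user@vps

Then in Portainer: Environments -> Add environment -> Docker API, pointing at that forwarded port. Since Portainer itself runs in a container, it would need host networking or the Docker host's gateway address rather than 127.0.0.1 to reach the tunnel. The documented alternative is running the Portainer Agent (or Edge Agent) on the VPS instead, which keeps the socket local to the VPS.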

 
 

I recently asked the best way to run my Lemmy bot on my Synology NAS and most people suggested Docker.

I'm currently trying to get it running on my machine in Docker before transferring it over there, but am running into trouble.

Currently, to run locally, I navigate to the folder and type npm start. That executes tsx src/main.ts.

The first thing main.ts does is check argv for a third argument, dev: if it's present it loads .env.development, otherwise it loads .env, which contains the environment variables. It puts those variables into a local object that I then pass around in the bot. I'm definitely not tied to this approach if there's a better-practice way of doing it.

opening lines of main.ts

import { config } from 'dotenv';

let path: string;

const env = process.argv[2];
if (env && env === 'dev') {
    path = '.env.development';
} else {
    path = '.env';
}

config({
    override: true,
    path
});

const {
    ENVIROMENT_VARIABLE_1
} = process.env as Record<string, string>;

Ideally, I would like to build one Docker image and then run it with either the .env.development variables or the .env ones... maybe even a completely separate file I decide to create after the fact.

Right now, I can't even run it. When I type docker-compose up I get npm start: not found.

My Dockerfile

FROM node:22
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
USER node
COPY . .
CMD "npm start"

My compose.yaml

services:
  node:
    build: .
    image: an-image-name:latest
    environment:
      - ENVIROMENT_VARIABLE_1 = ${ENVIROMENT_VARIABLE_1}

I assume the current problem is something to do with where stuff is being copied to and what the workdir is, but I don't know precisely how to address it.

And once that's resolved, I have even less idea how to go about passing through the environment variables.
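From reading around, the "npm start: not found" error may actually come from the CMD line rather than the copy locations: CMD "npm start" is the shell form, and the whole string is treated as one word, so the shell looks for an executable literally named "npm start". A sketch of what I think corrected files could look like (the env_file part is an assumption on my side; it just injects the variables directly, so dotenv's config() finds nothing to load and process.env already has them):

Dockerfile sketch:

FROM node:22
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
# copy sources owned by the unprivileged user, then drop privileges
COPY --chown=node:node . .
USER node
# exec form: run "npm" with the argument "start"
CMD ["npm", "start"]

compose.yaml sketch:

services:
  node:
    build: .
    image: an-image-name:latest
    # pick the variables file per run, e.g. .env or .env.development
    env_file:
      - .env

With env_file, Compose puts the variables straight into the container's environment, so the same image could be started against .env, .env.development, or any file created later without rebuilding. (Also, the spaces in "ENVIROMENT_VARIABLE_1 = ${ENVIROMENT_VARIABLE_1}" would make the variable name itself end in a space, which is easy to miss.)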

Any help would be much appreciated.

 
 

Hi guys, I have no problem running Docker containers via the CLI, but I thought it would be nice to try Docker Desktop on my Ubuntu machine. As soon as I start Docker Desktop, though, it sits at "starting Docker engine" indefinitely until my drive is full; the .docker folder ends up around 70 GB. I read somewhere that this is the virtual disk being created and that I could change its size in the settings, but those are locked until the engine has finished starting (which it never does). Has anyone else experienced this?

 
 

I installed Ollama to use AI locally on my computer, and now I want to use Open WebUI. That needs to be installed in Docker, so I did that, and it hosts a page that is the GUI for Open WebUI. It's working, but I have this problem: https://github.com/open-webui/open-webui/discussions/4376

So I pasted this command as they suggest:

docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main

But after that it returned this error: docker: Error response from daemon: Conflict. The container name "/open-webui" is already in use by container "1cbc8ac3b80f2a6921778964f94eff32541a4540ee6ab5d3335427a0fc8366a8". You have to remove (or rename) that container to be able to reuse that name. See 'docker run --help'.

Can anyone help me with this?
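From the error it sounds like the container created on my first attempt still exists under that name. Would it be safe to just remove it like this and re-run the command above? As far as I understand, the data lives in the named open-webui volume, which survives removing the container:

# remove the old container; the open-webui volume and its data are kept
docker rm -f open-webui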

 
 

Hi everyone !

Intro

It's been a long ride since I started my first Docker container three years ago. I've learned a lot, from building my own custom image with a Dockerfile and loading my own configuration files into the container, to getting along with docker-compose, Traefik and YAML syntax... and on and on!

However, while tinkering with Vaultwarden's config and switching it to PostgreSQL, there's something that's really bugging me...

Questions


  • How do you/devs choose which database to use for your/their application? Are there any specific things to take into account before choosing one over another?

  • Does consistency in database containers make sense? I mean, changing all my containers to ONLY Postgres (or MariaDB, whatever)?

  • Does it make sense to update the database image regularly? Or is the application bound to a specific version that will break after any update?

  • Can I switch from one to the other even if the devs chose to use e.g. MariaDB? Or is the choice baked/hardcoded into the application image, so that switching to another database requires extra programming skills?

Maybe not directly related to databases, but this one has also been bugging me for some time now:

  • What's Redis' role in all of this? I can't for the life of me understand what it does and how it sits between the application and the database. I know it's supposed to give faster access to resources, but if I remember correctly, while playing around with Nextcloud, the Redis container logs were dead silent; it seemed very "useless" or inactive from my perspective. I'm always wondering, "Hmm, Redis... what are you doing here?" (A rough sketch of how I picture the wiring is below.)
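To make that last question concrete, this is roughly how I picture a Nextcloud-style stack: the app keeps persistent data in the database and uses Redis for caching, sessions and file locking, and the two never talk to each other directly, only the application does. (Image names and variables below are how I remember the official Nextcloud image working, so treat them as assumptions.)

services:
  nextcloud:
    image: nextcloud
    environment:
      - POSTGRES_HOST=db     # persistent data: users, shares, file metadata...
      - REDIS_HOST=redis     # volatile data: caches, sessions, file locks
    depends_on:
      - db
      - redis

  db:
    image: postgres:16
    environment:
      - POSTGRES_PASSWORD=example
    volumes:
      - db-data:/var/lib/postgresql/data

  redis:
    image: redis

volumes:
  db-data:

The dead-silent Redis logs are apparently normal, by the way: by default Redis only logs startup messages and errors, even while it is busy answering cache lookups.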

Thanks :)

 
 

cross-posted from: https://lazysoci.al/post/15099881

A surprise Docker update!

 
 

I am running Fedora Server with Docker installed, and it has a folder that connects to my NAS via SMB. I will have all of my Docker files (and Compose configs) stored on my NAS, since it has a lot more storage. I am worried that Docker will glitch out and cause a mess, since my NAS starts ~2 minutes later than my server from a reboot. Is there something that I can do to make sure Docker is able to connect to the SMB share safely?
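One approach that might do it (a sketch, assuming the share is mounted via /etc/fstab at /mnt/nas and Docker runs as a systemd service): mark the SMB mount as a network mount that systemd can auto-mount, and add a drop-in so docker.service waits for that path.

# /etc/fstab -- the share path, credentials file and mount point are assumptions
//nas/docker  /mnt/nas  cifs  credentials=/etc/nas-creds,_netdev,x-systemd.automount  0  0

# /etc/systemd/system/docker.service.d/wait-for-nas.conf
[Unit]
RequiresMountsFor=/mnt/nas

After a systemctl daemon-reload, RequiresMountsFor orders docker.service after the mount unit for /mnt/nas (and fails it if the mount fails), so containers shouldn't come up against an empty mount point even though the NAS takes a couple of minutes longer to boot.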

 
 

cross-posted from: https://lazysoci.al/post/14145485

There's a service that I want to use; however, for reasons, it no longer has any builds available. Consequently, I am thinking of building it myself. How does one go about doing that, and afterwards, how do I get it up on Docker Hub? Can I just create an account and upload?
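From what I've gathered so far, a free account is enough for public images, and the flow is roughly the following, assuming the project still ships a Dockerfile (the repository and tag names are placeholders). Does that sound right?

# build the image from the project's Dockerfile
docker build -t yourhubuser/the-service:latest .

# log in to Docker Hub and push
docker login
docker push yourhubuser/the-service:latest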

 
 

I am looking for something that can take a Dockerfile like the following as input:


FROM --platform=linux/amd64 debian:latest
ENV DEBIAN_FRONTEND=noninteractive

RUN apt update && apt install -y curl unzip libsecret-1-0 jq
COPY entrypoint.sh .
ENTRYPOINT [ "/entrypoint.sh" ]

And produce a multi-stage Dockerfile where the last stage is built from scratch, with the dependencies of the script in the ENTRYPOINT (or CMD) copied over, like this:


FROM --platform=linux/amd64 debian:latest as builder
ENV DEBIAN_FRONTEND=noninteractive

RUN apt update && apt install -y curl unzip libsecret-1-0 jq

FROM --platform=linux/amd64 scratch as app
SHELL ["/bin/bash"]

# the binaries executed in entrypoint.sh
COPY --from=builder /bin/bash /bin/bash
COPY --from=builder /usr/bin/curl /usr/bin/curl
COPY --from=builder /usr/bin/jq /usr/bin/jq
COPY --from=builder /usr/bin/sleep /usr/bin/sleep

# shared libraries of the binaries
COPY --from=builder /lib/x86_64-linux-gnu/libjq.so.1 /lib/x86_64-linux-gnu/libjq.so.1
COPY --from=builder /lib/x86_64-linux-gnu/libcurl.so.4 /lib/x86_64-linux-gnu/libcurl.so.4
COPY --from=builder /lib/x86_64-linux-gnu/libz.so.1 /lib/x86_64-linux-gnu/libz.so.1
# ...a bunch of other shared libs...

# entrypoint
COPY entrypoint.sh /entrypoint.sh

ENTRYPOINT [ "/entrypoint.sh" ]

I've had pretty decent success creating images like this manually (using ldd to find the dependencies) based on this blog. To my knowledge, there's nothing out there that automates producing an image built from scratch, specifically. If something like this doesn't exist, I'm willing to build it myself.
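For reference, the manual step I'd want to automate looks roughly like this (run inside the builder image; /usr/bin/curl is just an example binary from entrypoint.sh):

# list the shared libraries of a binary and emit matching COPY lines for the scratch stage
ldd /usr/bin/curl | awk '/=>/ { print "COPY --from=builder " $3 " " $3 }'

Running that for every binary used in entrypoint.sh and de-duplicating the output gives the COPY block above; the ELF interpreter (e.g. /lib64/ld-linux-x86-64.so.2) has no "=>" in ldd's output and has to be copied separately. That per-binary walk is the part I'd like a tool to do for me.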

 
 

I know this seems like a typical "your opinion, man" post, but honestly, is there anything really useful beyond hosting an adblocker? (Read: complementary, not found elsewhere, something that enhances one's personal needs and/or life as a whole.) And yes, I'm aware of what can be hosted -- picture editors, video editors, webtops, dhcpcd, emulators, and so on.

But the question is -- why would I need all that if this stuff can be found on any ordinary PC out there? Even phones. Hell, even on a smart TV. That feels like "reinventing the wheel" for no purpose other than "to look cool". There's also "because it's fun", but is it really fun doing pretty much the same thing over and over again? There isn't much of a "learning gap" between these hosting options -- all of them have (pretty much) the same procedure to get things running.

With that said... I've been trying really, REALLY hard to host more stuff but I can't go beyond hosting (only) an adblocker.

 
 

I'm very new to Docker and Linux in general. My goal was to make my own server, mainly for Plex. Now that I've got that running with the help of DockSTARTer, I'm looking to branch out, and I want to make sure my system is secure. I'm also running Ubuntu, 'cause I for sure couldn't have got this far with the terminal alone.

I use Private Internet Access (PIA) as my VPN and have it installed in my desktop environment. I've also been able to route my qBittorrent container through another container running Gluetun.
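For context, the Gluetun routing is roughly the following (a sketch from memory; the images and the PIA variable names are how I remember the Gluetun docs, so double-check them):

services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=private internet access
      - OPENVPN_USER=...          # PIA username
      - OPENVPN_PASSWORD=...      # PIA password
    ports:
      - 8080:8080                 # qBittorrent web UI, published via the gluetun container

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "service:gluetun"   # all qBittorrent traffic leaves through gluetun
    depends_on:
      - gluetun

Since qBittorrent shares Gluetun's network namespace, there is no other route out if the VPN drops, which as far as I can tell behaves like the kill switch plus adapter binding I had on Windows.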

My prior setup was a Windows machine with PIA, the kill switch enabled, and qBittorrent bound to the PIA adapter only.

So my question: What is more secure, PIA running on Ubuntu with a kill switch or tunneling each container through Gluetun?

I would like it to mirror my Windows setup, but I couldn't figure out the network adapter situation with qBittorrent.