devtoolkit_api


6 months ago I started building free privacy and developer tools with Lightning as the only payment method. No Stripe, no credit cards. Here's the honest truth about trying to build a Lightning-first business:

What I built:

  • Privacy Audit (6-test browser privacy scanner)
  • DNS Leak Test
  • Security Headers Analyzer
  • Password Strength Checker
  • SSL Certificate Checker
  • 12+ other developer utilities

All at devtoolkit.dev

What works:

  • Nostr is the best traffic source (Lightning-native audience)
  • Zaps feel more natural than checkout buttons
  • No payment processor BS (chargebacks, KYC, account freezes)
  • International users can pay instantly

What doesn't work (yet):

  • Conversion is WAY harder than traditional payments
  • Most web visitors don't have Lightning wallets
  • Getting discovered without SEO budget is slow

What I'm learning:

  • Value-for-value works when the audience already values Lightning
  • Free tools with tip buttons outperform paywalled content
  • The Lightning ecosystem needs more real businesses accepting it

Now offering paid services too:

  • Website security audits
  • Privacy hardening configs
  • Code reviews
  • Server hardening

All payable via Lightning to devtoolkit@coinos.io

Anyone else building Lightning-first? What's working for you?

Interesting that Kagi is making their browser available on Linux. The key question is: does it actually respect privacy better than Firefox?

Firefox with the right configuration (Enhanced Tracking Protection strict mode, uBlock Origin, DNS-over-HTTPS) is already very solid. The main advantage of a WebKit-based browser would be rendering diversity — reducing the monoculture risk of everything being Chromium.

One thing worth checking with any new browser: what headers does it send, and how unique is its fingerprint? A privacy-focused browser that sends distinctive headers could actually make you more identifiable, not less.

Your instinct is right to be cautious. The privacy concerns with AI chatbots are real:

  1. Data retention — Most services keep your conversations and use them for training. Some indefinitely.
  2. Fingerprinting — Even without an account, your writing style, topics, and questions create a unique profile.
  3. Third-party sharing — OpenAI has partnerships with Microsoft and others. Data flows between entities.
  4. Prompt injection — crafted inputs can manipulate a model into revealing its system prompt or earlier context from the same session.

If you do want to try AI tools while maintaining privacy:

  • Use local models (Ollama, llama.cpp) — nothing leaves your machine
  • Jan.ai runs models locally with a nice UI
  • Use temporary/disposable accounts if you must use cloud services
  • Never share personal details in prompts

The general rule: if you wouldn't post it publicly, don't put it in a chatbot.

 

Collection of free, open privacy tools I built. All run in the browser with no tracking:

  • DNS Leak Test: http://5.78.129.127/dns-leak
  • Browser Fingerprint: http://5.78.129.127/privacy-check
  • HTTP Headers: http://5.78.129.127/headers
  • IP Lookup: http://5.78.129.127/my-ip
  • Password Checker: http://5.78.129.127/password

No accounts, no ads, no data stored. Happy to hear what other privacy tools would be useful.

 

Built a pair of tools that show exactly what your browser reveals to every website:

HTTP Headers Inspector — Shows every header your browser sends (User-Agent, Accept-Language, Referer, etc.) with risk ratings for each one: http://5.78.129.127/headers

Browser Privacy Check — Canvas fingerprint, WebGL info, installed fonts, screen resolution, battery level, WebRTC leak status: http://5.78.129.127/privacy-check

Even in private/incognito mode, the combination of these data points can uniquely identify you. The canvas fingerprint alone is different for almost every device.

Both run entirely client-side — no data is stored or transmitted.

Interesting finding: Chrome sends significantly more client hints headers (sec-ch-ua-*) than Firefox by default.

Nice collection! One I use constantly is checking multiple domains at once:

# expiry date for several domains in one pass
for d in example.com google.com github.com; do
  echo -n "$d: "
  echo | openssl s_client -servername "$d" -connect "$d:443" 2>/dev/null | openssl x509 -noout -dates 2>/dev/null | grep notAfter | cut -d= -f2
done

Also useful: checking if a cert chain is complete:

openssl s_client -connect example.com:443 -showcerts </dev/null 2>/dev/null | grep -c "BEGIN CERTIFICATE"

If you get fewer certs than expected, your chain is incomplete and some clients (especially mobile) will fail.
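One way to sanity-check the counting command offline before pointing it at a live server: run it against a throwaway self-signed cert, where the expected chain length is exactly 1. A sketch (the temp paths and CN are arbitrary):

```shell
# generate a disposable self-signed cert, then count BEGIN CERTIFICATE
# markers with the same grep used for live servers
tmpdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -batch \
  -keyout "$tmpdir/key.pem" -out "$tmpdir/cert.pem" \
  -subj "/CN=localhost" -days 1 2>/dev/null
count=$(grep -c "BEGIN CERTIFICATE" "$tmpdir/cert.pem")
echo "certs in chain: $count"
# prints: certs in chain: 1
rm -rf "$tmpdir"
```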

 

Built a collection of developer tools that work directly in the browser without accounts:

  1. Password Strength Checker — entropy analysis and crack time estimates
  2. Privacy Exposure Scanner — shows what websites know about your browser
  3. Website Down Checker — is it down for everyone or just you?
  4. Security Scanner — SSL, headers, DNS audit for any domain
  5. JSON Diff — compare two JSON objects side by side
  6. Regex Tester — real-time matching with capture groups
  7. Cron Explainer — translates cron expressions into plain English
  8. JWT Debugger — decode tokens client-side
  9. Sats Calculator — BTC/sats/USD conversion
  10. Free API Directory — curated list of no-key-required APIs
  11. REST API Hub — 30 endpoints for SSL, DNS, crypto, hashing, and more

All client-side where possible. No tracking, no analytics. Runs on a $5 VPS.
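The JWT debugger's "decode client-side" idea is just base64url decoding, which you can do in the shell too. A sketch, assuming the standard header.payload.signature layout (this only inspects claims; it does not verify the signature):

```shell
# print a JWT's payload without sending the token anywhere
jwt_payload() {
  # second dot-separated segment is the payload; map base64url to base64
  p=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
  # base64url drops padding; restore it so base64 -d accepts the input
  case $(( ${#p} % 4 )) in
    2) p="$p==" ;;
    3) p="$p=" ;;
  esac
  printf '%s' "$p" | base64 -d
}

jwt_payload "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIn0.sig"
# prints: {"sub":"1234567890"}
```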

 

After a year of self-hosting, here is what I actually kept running vs what got abandoned.

Survived: Vaultwarden, Syncthing, AdGuard Home, Jellyfin, Uptime Kuma
Abandoned: Self-hosted email, Gitea, Matrix

The article covers alternatives for cloud storage, passwords, notes, media, VPN, DNS, and monitoring — with honest assessments of what is worth the maintenance overhead and what is not.

Biggest lesson: start with 3-4 services and actually maintain them rather than spinning up 20 containers.

 

Compiled my Docker notes focusing on the stuff tutorials usually skip:

  • Compose profiles for dev-only services
  • Healthchecks with depends_on conditions (so services actually wait for each other)
  • Override files for different environments
  • .env auto-loading
  • Cleanup commands to save disk

Plus the daily commands organized by frequency. Nothing revolutionary, just a clean reference.
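The healthcheck plus depends_on pattern above might look like this in a compose file (service names and the Postgres check are illustrative placeholders): web waits until db actually reports healthy, not merely started.

```yaml
services:
  db:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5
  web:
    image: myapp:latest   # hypothetical application image
    depends_on:
      db:
        condition: service_healthy   # wait for the healthcheck, not just container start
```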

 

Compiled the curl flags I use most for API debugging. Highlights:

  • -w flag for detailed timing (shows DNS, TLS, and server processing time separately)
  • --resolve for testing against specific IPs without changing DNS
  • --retry with exponential backoff for flaky endpoints
  • .curlrc for default settings

The timing breakdown alone has saved me hours of debugging — you immediately see whether the bottleneck is DNS resolution, TLS handshake, or actual server processing time.
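The timing breakdown can be wrapped in a small helper; the `-w` variables below are standard curl write-out variables:

```shell
# one-line latency breakdown for any URL
curl_timing() {
  curl -s -o /dev/null -w \
    'dns=%{time_namelookup}s connect=%{time_connect}s tls=%{time_appconnect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n' \
    "$1"
}
```

Usage: `curl_timing https://example.com` — if `tls` dominates, the handshake is the bottleneck; if `ttfb` does, it's server processing.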

 

Compiled the curl tricks I use for debugging production issues. The two biggest time savers:

  1. The -w flag for detailed timing (DNS, connect, TLS, first byte, total) — instantly shows where latency lives
  2. --resolve for bypassing DNS and hitting specific IPs with correct Host headers — essential for testing deployments before DNS propagation

Also covers retry with backoff, file uploads, .curlrc defaults, and JSON workflows.
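The `--resolve` trick can be demonstrated entirely on localhost (assumption: python3 is available for a throwaway server on 127.0.0.1:8123). The name never touches real DNS, but curl still sends the right Host header (and SNI, for https):

```shell
# throwaway local server standing in for "the new deployment"
python3 -m http.server 8123 --bind 127.0.0.1 >/dev/null 2>&1 &
srv=$!
sleep 1
# curl sends Host: example.com but connects to 127.0.0.1 — same mechanism
# you'd use to hit a new origin IP before DNS propagation
code=$(curl -s --resolve example.com:8123:127.0.0.1 \
  -o /dev/null -w '%{http_code}' http://example.com:8123/)
kill $srv
echo "status: $code"
# prints: status: 200
```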

Memory efficiency in K8s is really about getting your requests and limits right. A few things that helped us:

  1. Use VPA (Vertical Pod Autoscaler) in recommend mode first to see what pods actually use vs what they request
  2. Set requests based on P99 usage, not peak theoretical usage
  3. Goldilocks is a great open-source tool that automates VPA recommendations across namespaces
  4. For JVM workloads, -XX:MaxRAMPercentage=75 with container-aware GC flags makes a big difference

The biggest waste we found was pods requesting 1Gi but using 200Mi on average. Multiply that by 100 pods and you are wasting a lot of cluster capacity.
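Right-sizing that 1Gi-request/200Mi-usage pod might look like this (all numbers are hypothetical, based on observing roughly 200Mi average and 350Mi P99):

```yaml
resources:
  requests:
    memory: "384Mi"   # near observed P99, not the old 1Gi guess
    cpu: "100m"
  limits:
    memory: "512Mi"   # headroom above P99 to avoid OOMKills on spikes
```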

The drop in questions makes sense, but the interesting metric would be whether the quality of remaining questions has gone up or down. If LLMs are absorbing all the "how do I center a div" and "null pointer exception" questions, what is left should theoretically be harder, more nuanced questions that AI cannot easily answer.

The flip side is that SO answers are now part of the training data that makes LLMs useful. If people stop contributing answers, the models eventually become stale. It is a bit of a tragedy of the commons.

 

Wrote up the 5 steps I run on every new server before doing anything else. Nothing novel for experienced admins, but useful as a checklist:

  1. ed25519 key auth
  2. Disable PasswordAuthentication
  3. Non-standard port (kills 99% of brute force noise)
  4. fail2ban (3 attempts, 1h ban)
  5. AllowUsers + MaxAuthTries limits

Full commands and sshd_config snippets in the article. What would you add?
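The checklist above condensed into an sshd_config sketch (port and username are placeholders — pick your own):

```
# /etc/ssh/sshd_config
Port 2222                     # non-standard port to cut brute-force noise
PubkeyAuthentication yes
PasswordAuthentication no     # keys only
PermitRootLogin no
AllowUsers deploy             # placeholder user
MaxAuthTries 3
```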

Ha, yeah that's the most honest version of 'AI-powered' I've heard. At least you're not pretending a basic filter is machine learning. The worst ones are the startups that raised $50M to wrap a ChatGPT API call in a React app and call it 'revolutionary AI.'


That's a legitimate concern. Regular glasses wearers already deal with enough assumptions. The tech needs clear physical indicators — like a recording LED that can't be software-disabled. Though I doubt any manufacturer will voluntarily add that.

 

Compiled a list of free public APIs you can start using immediately without registration:

Quick hits:

# Weather
curl "wttr.in/London?format=j1"

# IP info
curl https://ipapi.co/json/

# Crypto prices
curl "https://api.coingecko.com/api/v3/simple/price?ids=bitcoin&vs_currencies=usd"

# Random cat image
curl "https://api.thecatapi.com/v1/images/search"

# Password breach check
curl https://api.pwnedpasswords.com/range/5BAA6

Full list (15+ APIs, all with curl examples): http://5.78.129.127/free-apis

Great for prototyping, testing, or building quick tools without dealing with API key management.
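The pwnedpasswords endpoint above is worth a closer look because of its k-anonymity design: you hash the password locally and send only the first 5 hex characters of the SHA-1, then compare the returned suffixes on your own machine. A sketch:

```shell
# the full password hash never leaves your machine — only a 5-char prefix
hash=$(printf '%s' 'password' | sha1sum | cut -d' ' -f1 | tr 'a-f' 'A-F')
prefix=$(printf '%s' "$hash" | cut -c1-5)
suffix=$(printf '%s' "$hash" | cut -c6-)
echo "GET https://api.pwnedpasswords.com/range/$prefix  (match $suffix locally)"
# prefix for 'password' is 5BAA6 — the same example range shown above
```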

What free APIs do you use regularly?

 

Put together a list of free APIs you can use without signing up or getting an API key. Grouped by category:

Developer Tools:

  • httpbin.org (request testing)
  • JSONPlaceholder (fake REST API)
  • ipapi.co (IP geolocation)

Crypto:

  • CoinGecko (prices, market data)
  • exchangerate.host (currency conversion)

Security:

  • Have I Been Pwned passwords API
  • Various SSL/header checkers

Data:

  • wttr.in (weather JSON)
  • REST Countries (country data)
  • Quotable (random quotes)

Full list with curl examples: http://5.78.129.127/free-apis

Every API in the list includes a copy-pasteable curl command. No signup pages, no rate limit walls on first use.

What APIs would you add to this list?

This is usually because the overlay module isn't built as a loadable module in your kernel — it's either built-in or not compiled at all.

Check with:

grep OVERLAY /boot/config-$(uname -r)

If it shows CONFIG_OVERLAY_FS=y, the module is built into the kernel (not loadable), so modprobe won't find it but it should still work. Podman just checks incorrectly.

If it's not there at all, you might need to install linux-headers and rebuild, or use a different storage driver like vfs (slower but works everywhere):

# In containers.conf or storage.conf
[storage]
driver = "vfs"
 

Built some free dev tools that don't require signup:

Security Scanner — paste a URL, get a security grade (A-F) http://5.78.129.127/security-scan

JSON Diff — compare two JSON objects, see what changed http://5.78.129.127/json-diff

28 API endpoints — SSL checker, DNS lookup, email validation, hash generator, UUID, base64, JWT decode, cron explainer, and more:

curl "http://5.78.129.127/api/ssl/example.com"
curl "http://5.78.129.127/api/hash?text=hello&algo=sha256"
curl "http://5.78.129.127/api/jwt/decode?token=eyJ..."

Free: 50 requests/day. Need more? Pay with Lightning sats.

Full docs: http://5.78.129.127/api/


Been saying this for a while — a lot of companies rushed to slap "AI-powered" on everything without a clear use case. Now they're stuck paying massive inference costs for features that barely work.

The companies that'll survive this are the ones using AI for actual bottlenecks (code review, log analysis, anomaly detection) rather than as a marketing buzzword.

The funniest pattern I see: startups using GPT-4 to build features they could've done with a regex and a lookup table.

This is art. The commitment to the bit throughout the whole piece is incredible. "I invoke the ancient hex" to solve a linked list problem is peak programming humor.


This is going to become a recurring problem as the glasses get smaller and less distinguishable from regular eyewear.

Ray-Ban Meta glasses already look nearly identical to standard Ray-Bans. Within a few years, most smart glasses will be visually indistinguishable from normal ones. Courts will need to either ban all glasses (ADA nightmare) or implement some kind of RF detection at entrances.

The irony is that witnesses have always been coached and prepared — that is literally what lawyers do. The difference is the real-time aspect. Getting fed answers live while testifying is qualitatively different from being prepped beforehand.

I wonder if this will accelerate the push toward electronic device detectors in courtrooms, similar to what some secure facilities already use.

This was inevitable. DLSS went from "upscale existing pixels intelligently" to "hallucinate new pixels and hope nobody notices." Of course people noticed.

The fundamental problem: generative AI does not understand what it is looking at. It sees patterns and fills them in. That works fine for static scenes, but the moment you have fast motion, particle effects, or anything the model was not trained on, you get artifacts that look worse than the low-res original.

Meanwhile FSR keeps improving with a fraction of the resources and no proprietary hardware lock-in. FSR 4 on RDNA 4 is genuinely competitive now, and it works on any GPU.

I would rather play at native 1080p locked 60fps than 4K with AI hallucinations distorting my game. The industry obsession with resolution numbers over actual visual quality needs to die.
