MindfulMaverick

joined 4 months ago
 

I'm trying to find a place where you can ask broader development questions, not just specific error messages.

StackOverflow and Codidact are way too restrictive: if your question isn't a precise technical issue with a reproducible example, it gets shut down immediately. Reddit and Lemmy seem more focused on news and memes; actual questions and discussions tend to sink without engagement. And honestly, the kind of specific error-driven questions StackOverflow excels at are things AI can solve instantly now.

What I'm really looking for is a community (forum, Discord, whatever) where you can get help on broader topics related to software engineering.

Does anything like this still exist? Somewhere with actual humans willing to discuss the process of building software, not just fix syntax?

[–] MindfulMaverick@piefed.zip 1 points 1 week ago

I'm trying to download all fics from that specific forum. Sorry I wasn't clear.

[–] MindfulMaverick@piefed.zip 1 points 1 week ago

If I used sqlite or any other SQL database I don't think users could collaborate on building the database, so I was thinking of json files committed to a git repository online.
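As a sketch of that idea (the directory name and field names are just assumptions), each record could live in its own timestamped JSON file, so git diffs stay small and concurrent contributors rarely conflict:

```python
import json
import time
from pathlib import Path

DATA_DIR = Path("data")  # the git-tracked directory collaborators share


def save_record(record_id, record):
    """Write one record per file; the timestamp says when it was fetched."""
    DATA_DIR.mkdir(exist_ok=True)
    entry = {"fetched_at": time.time(), "record": record}
    (DATA_DIR / f"{record_id}.json").write_text(json.dumps(entry, indent=2))
```

One file per record means two users updating different records produce non-overlapping diffs, which git merges cleanly.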

 

Want to back up your favorite NSFW Creative Writing stories? Here's the complete workflow:

Step 1: Gather Links

  • Start with the comprehensive link collection from Cyb3rNexus's GitHub Gist – it already contains hundreds of pre-filtered thread links!
  • For more recent stories, navigate to NSFW Creative Writing
  • Use Link Gopher browser extension to extract all links from paginated results
  • Combine both sources and save all links into qq_links.txt
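The combining step above can be sketched in a few lines of Python (the source file names are just examples); it merges the link files while dropping blanks and duplicates:

```python
from pathlib import Path


def combine_link_files(sources, output="qq_links.txt"):
    """Merge link files into one, dropping blank lines and duplicates
    while preserving the original order."""
    seen = []
    for src in sources:
        for line in Path(src).read_text().splitlines():
            link = line.strip()
            if link and link not in seen:
                seen.append(link)
    Path(output).write_text("\n".join(seen) + "\n")
    return len(seen)
```

For example, `combine_link_files(["gist_links.txt", "recent_links.txt"])` would produce the deduplicated qq_links.txt used in the next step.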

Step 2: Filter Thread Links
Create filter_qq_links.py with the Python script below, then run:

python filter_qq_links.py  

This extracts only valid thread links (matching /threads/ pattern) into filtered_links.txt

Step 3: Download with Fichub

# Install fichub-cli  
pip install -U fichub-cli  

# Download all fics  
fichub_cli -i filtered_links.txt -o "/home/user/downloads/QQ"  

That's it! All stories will be downloaded as EPUB files to your specified folder. Happy reading!

#!/usr/bin/env python3  
"""  
Simple version - filter QuestionableQuesting thread links  
"""  

import re  


def filter_qq_links(input_file, output_file):  
    """Filter QQ thread links from input file and save to output file."""  
    pattern = r"^https://forum\.questionablequesting\.com/threads/[^/]+\d+/?$"

    with open(input_file, "r") as f:  
        links = [line.strip() for line in f if line.strip()]  

    valid_links = [link for link in links if re.match(pattern, link)]  

    with open(output_file, "w") as f:  
        f.write("\n".join(valid_links))  

    print(f"Found {len(valid_links)} valid links out of {len(links)} total")  
    print(f"Results saved to {output_file}")  


# Usage  
if __name__ == "__main__":  
    filter_qq_links("qq_links.txt", "filtered_links.txt")  
 

I'm looking for advice on building a collaborative caching system for APIs with strict rate limits, one that automatically commits updates to Git so multiple users can share the scraping load and reduce server strain.

The idea is to maintain a local dataset where each piece of data has a timestamp. When anyone runs the script, it only fetches records older than a configurable threshold from the API, serving everything else from the local cache. After fetching new data, the script would automatically commit the changes to a shared Git repository, so subsequent users benefit from the updated cache without hitting the server. This way, a task that would take days for one person could complete in seconds for the next.

Has anyone built something like this, or know of existing tools/frameworks that support automated Git commits for collaborative data collection with timestamp-based incremental updates?
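A minimal sketch of that timestamp-threshold logic (the cache directory, file layout, and the `fetch_record` stub are all hypothetical placeholders, not an existing tool):

```python
import json
import subprocess
import time
from pathlib import Path

CACHE_DIR = Path("cache")   # git-tracked cache directory (name is an assumption)
MAX_AGE = 24 * 3600         # refetch anything older than one day


def fetch_record(record_id):
    """Stand-in for the real rate-limited API call."""
    return {"id": record_id, "data": f"payload-{record_id}"}


def commit_to_git(path):
    """Share the refreshed entry; skipped quietly if git is unavailable."""
    try:
        subprocess.run(["git", "add", str(path)], capture_output=True)
        subprocess.run(["git", "commit", "-m", f"cache: refresh {path.stem}"],
                       capture_output=True)
    except FileNotFoundError:
        pass


def get_record(record_id):
    """Serve fresh entries from the local cache; fetch and commit stale ones."""
    path = CACHE_DIR / f"{record_id}.json"
    if path.exists():
        entry = json.loads(path.read_text())
        if time.time() - entry["fetched_at"] < MAX_AGE:
            return entry["record"]            # cache hit: no API traffic
    record = fetch_record(record_id)
    CACHE_DIR.mkdir(exist_ok=True)
    path.write_text(json.dumps({"fetched_at": time.time(), "record": record}))
    commit_to_git(path)
    return record
```

After a `git pull`, a second user's first run would find every fresh entry already cached and make no API calls at all, which is the days-to-seconds effect described above.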

[–] MindfulMaverick@piefed.zip 35 points 1 month ago (5 children)

-23°C for civilized people

[–] MindfulMaverick@piefed.zip -1 points 2 months ago (4 children)

I know you made that up, and you have no idea what you are talking about, because you included Russia there.