simon

joined 1 year ago
[–] simon@slrpnk.net 2 points 3 weeks ago (1 children)

Is there any clarity about what the future with chat control will look like? As in what exactly apps will need to implement.

This part about self-evaluation confuses me:

Under the new rules, online service providers will be required to assess how their platforms could be misused and, based on the results, may need to "implement mitigating measures to counter that risk," the Council notes.

I assume all chat apps would have to take measures, since generic data can be sent through them, including CSAM. Or could this quote be interpreted otherwise? I wonder what exactly is meant by voluntary then.

Does this "mitigating measure" in practice mean sending a hash of each image sent through the messenger to some service built by Google or Apple for comparison against known CSAM, since building a database of hashes to compare against is only realistically possible for the largest corporations? Or would the actual image itself have to leave the device, since it could be argued that some remote AI could identify CSAM even if it is not yet in any database? Perhaps a locally running AI model could do a decent enough job that nothing has to leave the device during the evaluation stage.
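To make the hash-matching idea concrete, here is a deliberately simplified sketch. Real systems (e.g. PhotoDNA) use *perceptual* hashes that survive resizing and re-encoding; this toy version uses a plain SHA-256, which only catches byte-identical files, and the "known hash" set is a placeholder, not any real database:

```python
import hashlib

# Hypothetical set of known-bad hashes. The single entry below is just
# the SHA-256 of the bytes b"test", used here for illustration only.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def flag_image(image_bytes: bytes) -> bool:
    """Return True if the image's hash matches the known-hash set."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_HASHES

# A byte-identical copy matches; any re-encoded or cropped copy would not,
# which is exactly why deployed systems use perceptual hashing instead.
print(flag_image(b"test"))   # True
print(flag_image(b"other"))  # False
```

Note that even perceptual hashing only finds *known* material, which is why the quoted rules leave room for the AI-classifier approach discussed above.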

But then again, there will always be false positives, where an innocent person's image would be uploaded to... the service provider (like Signal) for review? So you could never be sure that your communication stays private, since the risk of false positives is always there. Regardless of what the solution is, the user will have to give up fully owning their device, since this chat control service can always decide to take control of your device and upload your communication somewhere.

[–] simon@slrpnk.net 0 points 4 months ago

Is this any different from getting a sharing link from other chatbots, Google Docs or anywhere else? Seems like expected behavior. Or are the others not indexable by search engine for some reason?

[–] simon@slrpnk.net 2 points 7 months ago

Ooh, I don't know why I assumed that XD

Makes Obsidian way more interesting.

[–] simon@slrpnk.net 3 points 7 months ago (2 children)

Wiki.vim https://github.com/lervag/wiki.vim

It lets you create a wiki with links between pages.

Unlike Obsidian, it doesn't put your personal data in the cloud.

Unlike the similarly named vimwiki, it doesn't use a custom file format; it uses Markdown. I think you can configure vimwiki to use Markdown as well, though with reduced plugin functionality.
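For reference, a minimal setup might look like this (the paths are illustrative; check each plugin's `:help` for the full option list):

```vim
" wiki.vim: point it at a directory of Markdown files
let g:wiki_root = '~/wiki'

" vimwiki equivalent: switch its default syntax to Markdown
let g:vimwiki_list = [{'path': '~/wiki', 'syntax': 'markdown', 'ext': '.md'}]
```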

[–] simon@slrpnk.net 2 points 10 months ago (4 children)

I haven't found something that has good support for swiping words. Anysoft Keyboard gets too many words wrong.

If I thought Google was actually collecting what I type, I would put up with typing manually on another keyboard. But that kind of data collection without consent is illegal in the EU. I'd put the risk of Google breaking the law here at less than 10%, which is tolerable.

[–] simon@slrpnk.net 3 points 10 months ago

Seems I was confused about it being Gboard. It's one of those suggestion lines that pop up over the keyboard, but apparently it's a separate service.

[–] simon@slrpnk.net 6 points 10 months ago (1 children)

Thank you! This must be it.

I don't have message content in my notifications, so I don't think it can read any data from there. Maybe Signal just supplies it to this service directly?


Android's Gboard always suggests replies in chat apps that fit the context of what my contacts write.

If my previous message were related, I would assume it predicts what my contact will say in response and bases the suggestion on that. But even if the contact changes the topic, the suggestions are appropriate.

I don't expect that the apps all share the conversation with Gboard. So how are the predictions made?

It seems unlikely that it would take screenshots and base predictions on that. But otherwise I don't know how it is possible.

[–] simon@slrpnk.net 2 points 11 months ago

Cool! How do you make these?

[–] simon@slrpnk.net 2 points 1 year ago (1 children)

How does the ability to detect more faces relate to mental health? It doesn't seem like something negative.

[–] simon@slrpnk.net 2 points 1 year ago

Thanks for sharing. I can definitely see how life can be better in a richer and more progressive place. I guess a major factor for choosing where to live should be whether people there are hopeful for the future.

[–] simon@slrpnk.net 1 points 1 year ago (6 children)

Just curious, why do you prefer those countries over Japan? Anything lacking there?

[–] simon@slrpnk.net 8 points 1 year ago (1 children)

If I try to do the threat modeling, I guess I'm seeing three levels:

  1. Intelligence agencies. They probably have access to all possible data about you. Don't make them your enemy. Hopefully they never turn evil in your country.
  2. Large technology companies. They make the infrastructure, like phone operating systems and things you can't get around on the modern internet, like Cloudflare. They can be reined in somewhat by legislation like the GDPR, but only to a degree. At least they have reasonably good security, so you don't fully lose control of your data. The worst thing they will do to you is try to convince you to buy stuff, which isn't all that bad.
  3. Smaller or non-tech companies that just are not competent enough to keep your data secure. They will use dependencies that spy on you, like Google Analytics or Android app frameworks that inject location tracking, or an online pharmacy running Facebook scripts that shares all your medical purchases with Facebook or elsewhere. A lot of this is illegal, but it is hard to detect and enforce, like a game of whack-a-mole. It's hard to know where your data goes, and it is probably being sold to whoever wants to pay, for example local police buying location data from data brokers (worth double-checking, but I think this actually happens). Since there is no limit to who can access the data, this is more worrying. But here you kind of have the big tech companies on your side: browsers and phones tend to have built-in tracker blocking these days, and you yourself can choose to be careful about what software you run from this category.

My point is that we should be clear about why we are concerned about the future. Who is the threat and how could they use your data against you? Breaking it down and pointing to a clear harm will help people around you understand why you are concerned.
