Flmaker

joined 1 month ago
[–] [email protected] 2 points 1 day ago

Thanks for sharing the link! I wouldn't have known about it otherwise.

[–] [email protected] -2 points 2 days ago (3 children)

I consider myself an open-source user, but I struggle to understand why I should trust these projects when I lack the technical knowledge to evaluate the underlying code, which is frequently updated. I am skeptical about the enthusiasm surrounding open-source software, especially since it is practically impossible for an independent auditor to verify every update.

This raises the question of why we should place our trust in these systems.

Then, through extensive searching, I found similar doubts raised in many online communities, including the one you mentioned.

I feel compelled to raise this issue, as it may help me—and others—better understand the rationale behind the blind trust placed in open-source software.

Additionally, I have noticed that open-source supporters often seem hesitant to address this dilemma. I wanted to bring this concern to the community here by sharing the opinions in other places and ask if I am the only one (or one of the very few) who harbors doubts.

This is why I believe it is an important topic to share and discuss with the members here (who are more knowledgeable than I am); that is my end goal with respect to your specific question.

Meanwhile, I will continue using open-source applications while I seek out like-minded individuals who share my doubts and push for further scrutiny.

[–] [email protected] 2 points 3 days ago* (last edited 3 days ago)

Take Open Source with a Grain of Salt: The Real Trust Dilemma

In the age of open-source software, there is a growing assumption that transparency inherently guarantees security and integrity. The belief is that anyone can check the code, find vulnerabilities, and fix them, making open-source projects safer and more reliable than their closed-source counterparts. However, this belief is often oversimplified, and it’s crucial to take open-source with a grain of salt. Here’s why.

The Trust Dilemma: Can We Really Trust Open Source Code?

There’s a famous story from the world of open-source development that highlights the complexity of this issue. Linus Torvalds, the creator of Linux, was once allegedly asked by the CIA to insert a backdoor into the Linux kernel. His response? He supposedly said, "No can do, too many eyes on the code." It seems like a reassuring statement, given the vast number of contributors to open-source projects, but it doesn’t fully account for the subtleties of how code can be manipulated.

In 2003, a suspicious change was in fact discovered in a source-code mirror of the Linux kernel: an "if" statement that wasn't comparing a value but assigning it, silently setting the caller's user ID to 0 (root). The change was no typo; it was intentional, and it was caught before going live only because it didn't correspond to any approved changeset. The question arises: who had the power to insert such a change into the code, bypassing standard review processes and security protocols? The answer remains elusive, and this event highlights a critical reality: even the open-source community isn’t immune to vulnerabilities, malicious actors, or hidden agendas.

Trusting the Maintainers

In the world of open-source, you ultimately have to trust the maintainers. While the system allows for community reviews, there’s no guarantee that every change is thoroughly vetted or that the maintainers themselves are vigilant and trustworthy. In fact, history has shown us that incidents like the XZ Utils supply-chain attack can go unnoticed for extended periods, even with a large user base. In the case of XZ, the malware was caught by accident, revealing a stark reality: while open-source software offers the potential for detection, it doesn’t guarantee comprehensive oversight.

It’s easy to forget that the very same trust issues apply to both open-source and closed-source software. Both models are prone to hidden vulnerabilities and backdoors, but in the case of open-source, there’s often an assumption that it’s inherently safer simply because it’s transparent. This assumption can lead users into a false sense of security, which can be just as dangerous as the opacity of closed-source systems.

The Challenge of Constant Auditing

Let’s be clear: open-source code isn’t guaranteed to be safe just because it’s open. Just as proprietary software can hide malicious code, so can open-source. Consider how quickly vulnerabilities can slip through the cracks without active, ongoing auditing. When you’re dealing with software that’s updated frequently, like Signal or any other active open-source project, a single audit isn’t enough; the code needs to be re-audited, by developers with deep technical knowledge, after every update.

Here’s the catch: most users, particularly those lacking a deep understanding of coding, can’t assess the integrity of the software they’re using. Imagine someone without medical expertise trying to verify their doctor’s competence. It’s a similar situation in the tech world: unless you have the skills to inspect the code yourself, you’re relying on others to do so. In this case, the “others” are the project’s contributors, who might be few in number or lack the necessary resources for a comprehensive security audit.

Moreover, open-source projects don’t always have the manpower to conduct ongoing audits, and this becomes especially problematic with the shift toward software-as-a-service (SaaS). As more and more software shifts its critical functionality to the cloud, users lose direct control over the environment where the software runs. Even if the code is open-source, there’s no way to verify that the code running on the server matches the open code posted publicly.

The Reproducibility Issue

One of the most critical issues with open-source software lies in ensuring that the code you see matches the code you run. While reproducible builds are a step in the right direction, they only help ensure that the built binaries match the source code. But that doesn’t guarantee the source code itself hasn’t been altered. In fact, one of the lessons from the XZ Utils supply-chain attack is that the attack wasn’t in the code itself but in the build process. The attacker inserted a change into a build script, which was then used to generate the malicious binaries, all without altering the actual source code.

This highlights a crucial issue: even with open-source software, the integrity of the built artifacts—what you actually run on your machine—can’t always be guaranteed, and without constant scrutiny, this risk remains. It’s easy to assume that open-source software is free from these risks, but unless you’re carefully monitoring every update, you might be opening the door to hidden vulnerabilities.

A False Sense of Security

The allure of open-source software lies in its transparency, but transparency alone doesn’t ensure security. Much like closed-source software, open-source software can be compromised by malicious contributors, dependencies, or flaws that aren’t immediately visible. As the XZ incident demonstrated, even well-established open-source projects can be vulnerable if they lack active, engaged contributors who are constantly checking the code. Just because something is open-source doesn’t make it inherently secure.

Moreover, relying solely on the open-source nature of a project without understanding its review and maintenance processes is a risky approach. While many open-source projects have a strong track record of security, others are more vulnerable due to lack of scrutiny, poor contributor vetting, or simply not enough people actively reviewing the code. Trusting open-source code, therefore, requires more than just faith in its transparency—it demands a keen awareness of the process, contributors, and the ongoing review that goes into each update.

Conclusion: Take Open Source with a Grain of Salt

At the end of the day, the key takeaway is that just because software is open-source doesn’t mean it’s inherently safe. Whether it’s the potential for hidden backdoors, the inability to constantly audit every update, or the complexities of ensuring code integrity in production environments, there are many factors that can undermine the security of open-source projects. The fact is, no system—open or closed—is perfect, and both models come with their own set of risks.

So, take open source with a grain of salt. Recognize its potential, but don’t assume it’s free from flaws or vulnerabilities. Trusting open-source software requires a level of vigilance, scrutiny, and often, deep technical expertise. If you lack the resources or knowledge to properly vet code, it’s crucial to rely on established, well-maintained projects with a strong community of contributors. But remember, no matter how transparent the code may seem, the responsibility for verification often rests on individual users—and that’s a responsibility that’s not always feasible to bear.

In the world of software, the real question is not whether the code is open, but whether it’s actively maintained, thoroughly audited, and transparently reviewed

AFTER

EVERY

SINGLE

UPDATE.

Until we can guarantee that, open-source software should be used with caution, not blind trust.

[–] [email protected] 1 points 3 days ago

better the devil you know, I suppose

 

Trusting Open Source: Can We Really Verify the Code Behind the Updates?

In today's fast-paced digital landscape, open-source software has become a cornerstone of innovation and collaboration. However, as the FREQUENCY and COMPLEXITY of UPDATES increase, a pressing question arises: how can users—particularly those without extensive technical expertise—place their trust in the security and integrity of the code?

The premise of open source is that anyone can inspect the code, yet the reality is that very few individuals have the time, resources, or knowledge to conduct a thorough review of every update. This raises significant concerns about the actual vetting processes in place. What specific mechanisms or community practices are established to ensure that each update undergoes rigorous scrutiny? Are there standardized protocols for code review, and how are contributors held accountable for their changes?

Moreover, the sheer scale of many open-source projects complicates the review process. With numerous contributors and rapid iterations, how can we be confident that the review processes are not merely cursory but genuinely comprehensive and transparent? The potential for malicious actors to introduce vulnerabilities or backdoors into the codebase is a real threat that cannot be ignored. What concrete safeguards exist to detect and mitigate such risks before they reach end users?

Furthermore, the burden of verification often falls disproportionately on individual users, many of whom may lack the technical acumen to identify potential security flaws. This raises an essential question: how can the open-source community foster an environment of trust when the responsibility for code verification is placed on those who may not have the expertise to perform it effectively?

In light of these challenges, it is crucial for the open-source community to implement robust mechanisms for accountability, transparency, and user education. This includes fostering a culture of thorough code reviews, encouraging community engagement in the vetting process, and providing accessible resources for users to understand the software they rely on.

Ultimately, as we navigate the complexities of open-source software, we must confront the uncomfortable truth: without a reliable framework for verification, the trust we place in these systems may be misplaced. How can we ensure that the promise of open source is not undermined by the very vulnerabilities it seeks to eliminate?

 

cross-posted from: https://lemmy.world/post/27344091

  1. Persistent Device Identifiers

My id is (1 digit changed to preserve my privacy):

38400000-8cf0-11bd-b23e-30b96e40000d

Android assigns Advertising IDs, unique identifiers that apps and advertisers use to track users across installations and account changes. Google explicitly states:

“The advertising ID is a unique, user-resettable ID for advertising, provided by Google Play services. It gives users better controls and provides developers with a simple, standard system to continue to monetize their apps.” Source: Google Android Developer Documentation

This ID allows apps to rebuild user profiles even after resets, enabling persistent tracking.

  2. Tracking via Cookies

Android’s web and app environments rely on cookies with unique identifiers. The W3C (web standards body) confirms:

“HTTP cookies are used to identify specific users and improve their web experience by storing session data, authentication, and tracking information.” Source: W3C HTTP State Management Mechanism https://www.w3.org/Protocols/rfc2109/rfc2109

Google’s Privacy Sandbox initiative further admits cookies are used for cross-site tracking:

“Third-party cookies have been a cornerstone of the web for decades… but they can also be used to track users across sites.” Source: Google Privacy Sandbox https://privacysandbox.com/intl/en_us/

  3. Ad-Driven Data Collection

Google’s ad platforms, like AdMob, collect behavioral data to refine targeting. The FTC found in a 2019 settlement:

“YouTube illegally harvested children’s data without parental consent, using it to target ads to minors.” Source: FTC Press Release https://www.ftc.gov/news-events/press-releases/2019/09/google-youtube-will-pay-record-170-million-settlement-over-claims

A 2022 study by Aarhus University confirmed:

“87% of Android apps share data with third parties.” Source: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies https://dl.acm.org/doi/10.1145/3534593

  4. Device Fingerprinting

Android permits fingerprinting by allowing apps to access device metadata. The Electronic Frontier Foundation (EFF) warns:

“Even when users reset their Advertising ID, fingerprinting techniques combine static device attributes (e.g., OS version, hardware specs) to re-identify them.” Source: EFF Technical Analysis https://www.eff.org/deeplinks/2021/03/googles-floc-terrible-idea

  5. Hardware-Level Tracking

Google’s Titan M security chip, embedded in Pixel devices, operates independently of software controls. Researchers at Technische Universität Berlin noted:

“Hardware-level components like Titan M can execute processes that users cannot audit or disable, raising concerns about opaque data collection.” Source: TU Berlin Research Paper https://arxiv.org/abs/2105.14442

Regarding Titan M: much of the research on it is being taken down, and very few papers remain online. This is one of the few still available today.

"In this paper, we provided the first study of the Titan M chip, recently introduced by Google in its Pixel smartphones. Despite being a key element in the security of these devices, no research is available on the subject and very little information is publicly available. We approached the target from different perspectives: we statically reverse-engineered the firmware, we audited the available libraries on the Android repositories, and we dynamically examined its memory layout by exploiting a known vulnerability. Then, we used the knowledge obtained through our study to design and implement a structure-aware black-box fuzzer, mutating valid Protobuf messages to automatically test the firmware. Leveraging our fuzzer, we identified several known vulnerabilities in a recent version of the firmware. Moreover, we discovered a 0-day vulnerability, which we responsibly disclosed to the vendor."

Ref: https://conand.me/publications/melotti-titanm-2021.pdf

  6. Notification Overload

A 2021 UC Berkeley study found:

“Android apps send 45% more notifications than iOS apps, often prioritizing engagement over utility. Notifications act as a ‘hook’ to drive app usage and data collection.” Source: Proceedings of the ACM on Human-Computer Interaction https://dl.acm.org/doi/10.1145/3411764.3445589

How can this be used nefariously?

Let's say you are a person who believes in Truth and who searches all over the net for truth. You find some things that are true. You post them somewhere. And you are taken down. You accept it, since this is ONLY one time.

But, this is where YOU ARE WRONG.

THEY can easily know your IDs, specifically your advertising ID, or one of the others above. They send this to Google to find out which EMAIL accounts are associated with these IDs. With 99.9% accuracy, AI can identify the correct EMAIL, because your EMAIL and ID will have SIMULTANEOUSLY logged into Google thousands of times in the past.

Then they can CENSOR you ACROSS the internet (YouTube, Reddit, etc.) because they know your ID. Even if you change your mobile, they still have other IDs, like your email. You can't remove all of them. This is how they can use it for CENSORING. (They will shadow-ban you, and you won't know it.)

[–] [email protected] 1 points 1 week ago* (last edited 1 week ago)

Thank you for that. I have been testing it just now, trying to get the full articles at once through the rules, if any. Unable to find a solution yet; I will search further to see if I can get the full articles at once for offline reading.

Update: unable to get the full article, other than the original page view.

I followed the info below, but it doesn't always work:

Convert Partial Articles in Feeds to Full-Text Articles

Feedbro has a built-in engine for transforming partial-text feed articles into full-text articles. This feature is not automatically on for all feeds and must be enabled per feed. To do that, right-click the feed in the feed tree and select Properties. Then use "Feed Entry Content" to adjust the full-text extraction settings. Click the "Preview" button to check that you get the desired results, and then press "Save".

Note that the full-text option obviously should not be used if the feed already provides full articles. Also, the full-text conversion doesn't work for all feeds and sites, but it works pretty well for the majority of sites.

So far, the only one that has been suggested and that I like is Chaski,

but you need to be online to retrieve the full articles one by one first; then it lets you read them offline.

There is nothing like a podcast player, which always downloads the full article to be read offline.

[–] [email protected] 2 points 1 week ago* (last edited 1 week ago)

Thank you, I have just tried that. Fluent Reader doesn't cache.

So far, the only one that has been suggested and that I like is Chaski,

but you need to be online to retrieve the full articles one by one first; then it lets you read them offline.

There is nothing like a podcast player, which always downloads the full article to be read offline.

[–] [email protected] 2 points 1 week ago* (last edited 1 week ago)

Thank you. So far, the only one that has been suggested and that I like is Chaski,

but you need to be online to retrieve the full articles one by one first; then it lets you read them offline.

There is nothing like a podcast player, which always downloads the full article to be read offline.

[–] [email protected] 2 points 1 week ago* (last edited 1 week ago)

I agree. So far, the only one that has been suggested and that I like is Chaski, but you need to be online to retrieve the full articles one by one first; then it lets you read them offline. There is nothing like a podcast player, which always downloads the full article to be read offline.

 

Need Your Suggestions: RSS Reader for Windows PC

I have been happy with a podcast player's feed reader on my Android for some time,

but I am about to give up because the screen size makes it difficult to read long articles, and I need an app for a Windows PC (one that gets the full text and then lets me read it offline).

I would appreciate your guidance on the best recommended RSS readers for a Windows PC:

- A visually good app for a Windows laptop
- Able to get the feeds with full text, then let me read them offline
 

 

Ref: https://www.rottentomatoes.com/tv/zero_day

Started watching the series the other day and completed it…

Here’s a short comment I came across; I somewhat agree with it:

Zero Day Netflix Series and the New America by M. M SAGMAN

The Zero Day series highlights the dangers of a “new America” through its plot and themes. Released by Netflix shortly after Trump’s re-election, the six-episode series features Robert De Niro as G. Mullen, a former president leading an investigation into a nationwide cyber attack. Mullen, portrayed as a patriotic and intelligent figure, faces moral dilemmas as the commission he heads prioritizes private law, allowing controversial decisions in a crisis.

The series also critiques the relationship between capital, media, and politics, exemplified by the character of President Mitchell, who embodies a mix of Obama and Harris. The narrative reveals how political figures, including Mullen’s daughter, navigate ethical challenges amid a backdrop of systemic issues, suggesting that the American dream often masks deeper problems.

While the series addresses the cyber attack as a societal crime, it emphasizes the rise of fascism as a more pressing concern. Mullen’s character reflects the complexities of leadership, as he grapples with personal loss and moral integrity. Ultimately, Zero Day presents a narrative that critiques the American political landscape while reinforcing the notion of the American dream, albeit through a flawed lens.

Despite its engaging premise, the series sacrifices truth for fiction, simplifying complex issues and portraying individual actors as the sole sources of systemic problems. This approach risks obscuring the broader capital-centered networks that shape American society and its global actions.

M. M. Sagman is a PhD student in Sociology. He has been actively involved in various civil society organizations and worked as an editor for a while. He is married and has two children.

I like the review above more than the series itself, and I would currently rate the series no higher than 7 out of 10.

[–] [email protected] 1 points 2 weeks ago

This is the only way I managed to get it done. Thank you.

 

Join this tactical, practical, and heretical discussion between Meredith Whittaker, President of Signal and a leading advocate for secure communication, and Guy Kawasaki, host of the Remarkable People podcast.

[–] [email protected] 1 points 1 month ago (2 children)

Tried all those already; none seems to be working for the portable version. Have you actually set up the portable LibreWolf as your default browser yourself?

 

Appreciate your help please

 

FBI Warns iPhone, Android Users—We Want ‘Lawful Access’ To All Your Encrypted Data By Zak Doffman, Contributor. Zak Doffman writes about security, surveillance and privacy. Feb 24, 2025

The furor after Apple removed full iCloud security for U.K. users may feel a long way from American users this weekend. But it’s not — far from it. What has just shocked the U.K. is exactly what the FBI told me it also wants in the U.S. “Lawful access” to any encrypted user data. The bureau’s quiet warning was confirmed just a few weeks ago.

The U.K. news cannot be seen in isolation and follows years of battling between big tech and governments over warranted, legal access to encrypted messages and content to fuel investigations into serious crimes such as terrorism and child abuse.

As I reported in 2020, “it is looking ever more likely that proponents of end-to-end security, the likes of Facebook and Apple, will lose their campaign to maintain user security as a priority.” It has taken five years, but here we now are.

The last few weeks may have seemed to signal a unique fork in the road between the U.S. and its primary Five Eyes ally, the U.K. But it isn’t. In December, the FBI and CISA warned Americans to stop sending texts and use encrypted platforms instead. And now the U.K. has forced open iCloud by threatening to mandate a backdoor. But the devil’s in the detail, and we’re fast approaching a dangerous pivot.

While CISA — America’s cyber defense agency — appears to advocate for fully secure messaging platforms, such as Signal, the FBI’s view appears to be different. When December’s encryption warnings hit in the wake of Salt Typhoon, the bureau told me while it wants to see encrypted messaging, it wants that encryption to be “responsible.”

What that means in practice, the FBI said, is that while “law enforcement supports strong, responsibly managed encryption, this encryption should be designed to protect people’s privacy and also managed so U.S. tech companies can provide readable content in response to a lawful court order.” That’s what has just happened in the U.K. Apple’s iCloud remains encrypted, but Apple holds the keys and can facilitate “readable content in response to a lawful court order.”

There are three primary providers of end-to-end encrypted messaging in the U.S. and U.K. Apple, Google and Meta. The U.K. has just pushed Apple to compromise iMessage. And it is more than likely that “secret” discussions are also ongoing with the other two. It makes no sense to single out Apple, as that would simply push bad actors to other platforms, which will happen anyway, as is obvious to any security professional.

In doing this, the U.K. has changed the art of the possible, bringing new optionality to security agencies across the world. And it has done this against the backdrop of the U.S. push for responsible encryption and Europe’s push for “chat control.” The U.K. has suddenly given America’s security agencies a precedent to do the same.

“The FBI and our partners often can’t obtain digital evidence, which makes it even harder for us to stop the bad guys,” warned former director Christopher Wray, in comments the bureau directed me towards. “The reality is we have an entirely unfettered space that’s completely beyond fully lawful access — a place where child predators, terrorists, and spies can conceal their communications and operate with impunity — and we’ve got to find a way to deal with that problem.”

The U.K. has just found that way. It was first, but unless a public backlash sees Apple’s move reversed, it will not be last. In December, the FBI’s “responsible encryption” caveat was lost in the noise of Salt Typhoon, but it shouldn’t be lost now. The tech world can act shocked and dispirited at the U.K. news, but it has been coming for years. While the legalities are different in the U.S., the targeted outcome would be the same.

Ironically, because the U.S. and U.K. share intelligence information, some American lawmakers have petitioned the Trump administration to threaten the U.K. with sanctions unless it backtracks on the Apple encryption mandate. But that’s a political view not a security view. It’s more likely this will go the other way now. As EFF has warned, the U.K. news is an “emergency warning for us all,” and that’s exactly right.

“The public should not have to choose between safe data and safe communities, we should be able to have both — and we can have both,” Wray said. “Collecting the stuff — the evidence — is getting harder, because so much of that evidence now lives in the digital realm. Terrorists, hackers, child predators, and more are taking advantage of end-to-end encryption to conceal their communications and illegal activities from us.”

The FBI’s formal position is that it is “a strong advocate for the wide and consistent use of responsibly managed encryption — encryption that providers can decrypt and provide to law enforcement when served with a legal order.”

The challenge is that while the bureau says it “does not want encryption to be weakened or compromised so that it can be defeated by malicious actors,” it does want “providers who manage encrypted data to be able to decrypt that data and provide it to law enforcement only in response to U.S. legal process.”

That’s exactly the argument the U.K. has just run.

Somewhat cynically, the media backlash that Apple’s move has triggered is likely to have an impact, and right now it seems more likely that we will see some sort of reversal of Apple’s move rather than more of the same. The UK government is now exposed as the only western democracy compromising the security of tens of millions of its citizens.

Per The Daily Telegraph, “the [UK] Home Office has increasingly found itself at odds with Apple, which has made privacy and security major parts of its marketing. In 2023, the company suggested that it would prefer to shut down services such as iMessage and FaceTime in Britain than weaken their protections. It later accused the Government of seeking powers to 'secretly veto’ security features.”

But now this quiet battle is front page news around the world. The UK either needs to dig in and ignore the negative response to Apple’s forced move, or enable a compromise in the background that recognizes the interests of the many.

As The Telegraph points out, the U.S. will likely be the deciding factor in what happens next. “The Trump administration is yet to comment. But [Tim] Cook, who met the president on Thursday, will be urging him to intervene,” and perhaps more interestingly, “Elon Musk, a close adviser to Trump, criticised the UK on Friday, claiming in a post on X that the same thing would have happened in America if last November’s presidential election had ended differently.”

Former UK cybersecurity chief Ciaran Martin thinks the same. “If there’s no momentum in the U.S. political elite and US society to take on big tech over encryption, which there isn’t right now, it seems highly unlikely in the current climate that they’re going to stand for another country, however friendly, doing it.”

Meanwhile the security industry continues to rally en masse against the change.

“Apple’s decision,” an ExpressVPN spokesperson told me, “is deeply concerning. By removing end-to-end encryption from iCloud, Apple is stripping away its UK customers’ privacy protections. This will have serious consequences for Brits — making their personal data more vulnerable to cyberattacks, data breaches, and identity theft.”

It seems inconceivable the UK will force all encrypted platforms to remove that security wrap, absent which the current move becomes pointless. The reality is that the end-to-end encryption ship has sailed. It has become ubiquitous. New measures need to be found that rely on metadata (already provided) instead of content.

Given the FBI’s stated position, what the Trump administration does in response to the UK is critical. Conceivably, the U.S. could use this as an opportunity to revisit its own encryption debate. That was certainly on the cards under a Trump administration pre Salt Typhoon. But the furor triggered by Apple now makes that unlikely. However the original secret/not secret news leaked, it has changed the dynamic completely.

[–] [email protected] 7 points 1 month ago (1 children)

British soldiers told to stop using WhatsApp and to use Signal instead, for security

George Grylls, Political Reporter Monday March 21 2022, 5.00pm GMT, The Times

British soldiers have been told to stop using WhatsApp over fears that Russia is intercepting their messages (photo: Benoit Tessier/Reuters)

British soldiers are being encouraged to use the Signal messaging app instead of WhatsApp, amid reports that Russian forces used insecure UK numbers to direct airstrikes in Ukraine.

Signal has a higher level of encryption than WhatsApp.

Military sources said that secure channels should be used to discuss sensitive matters but denied that the advice had been issued in response to security breaches resulting from the use of British phones in Ukraine.

https://www.thetimes.com/article/soldiers-told-to-use-signal-instead-of-whatsapp-for-security-6pxh9z5cx

 

by Lars Wilderang, 2025-02-11

Translated from the Swedish original

In a new instruction for fully encrypted applications, the Swedish Armed Forces have introduced a mandatory requirement that the Signal app be used for messages and calls with counterparts both within and outside the Armed Forces, provided they also use Signal.

The instruction, FM2025-61:1, specifies that Signal should be used to defend against interception of calls and messages via the telephone network and to make phone number spoofing more difficult.

It states, among other things:

“The intelligence threat to the Armed Forces is high, and interception of phone calls and messages is a known tactic used by hostile actors. […] Use a fully encrypted application for all calls and messages to counterparts both within and outside the Armed Forces who are capable of using such an application. Designated application: The Armed Forces use Signal as the fully encrypted application.”

The choice of Signal is also justified:

“The main reason for selecting Signal is that the application has widespread use among government agencies, industry, partners, allies, and other societal actors. Contributing factors include that Signal has undergone several independent external security reviews, with significant findings addressed. The security of Signal is therefore assumed to be sufficient to complicate the interception of calls and messages.

Signal is free and open-source software, which means no investments or licensing costs for the Armed Forces.”

Signal supports both audio and video calls, group chats, direct messages, and group calls, as well as a simple, event-based social media feature.

The app is available for iPhone, iPad, and Android, as well as desktop operating systems such as macOS, Windows, and Linux.

Since Signal can be used for phone calls, the instruction is essentially an order for the Armed Forces to stop using regular telephony and instead make calls via the Signal app whenever possible (except, for example, to companies and agencies that don’t have Signal), and no SMS or other inferior messaging services should be used.

Note that classified security-protected information should not be sent via Signal; this is about regular communication, including confidential data that is not classified as security-sensitive, as stated in the instruction. The same applies to files.

The instruction is a public document and not classified.

Signal is already used by many government agencies, including the Government Offices of Sweden and the Ministry for Foreign Affairs. However, the EU, through the so-called Chat Control (2.0), aims to ban the app, and the Swedish government is also mulling a potential ban, even though the Armed Forces now consider Signal a requirement for all phone calls and direct messaging where possible.

Furthermore, it should be noted that all individuals, including family and relationships, should already use Signal for all phone-to-phone communication to ensure private, secure, verified, and authentic communication. For example, spoofing a phone number is trivial, particularly for foreign powers with a state-run telecom operator, which can, with just a few clicks, reroute all mobile calls to your phone through a foreign country’s network or even to a phone under the control of a foreign intelligence service. There is zero security in how a phone call is routed or identified via caller ID. For instance, if a foreign power knows the phone number of the Swedish Chief of Defence’s mobile, all calls to that number could be rerouted through a Russian telecom operator. This cannot happen via Signal, which cannot be intercepted.

Signal is, by the way, blocked in a number of countries with questionable views on democracy, such as Qatar (Doha), as one may discover when trying to change flights there. This might serve as a wake-up call.

https://cornucopia.se/2025/02/forsvarsmakten-infor-krav-pa-signal-for-samtal-och-meddelanden/

[–] [email protected] 1 points 1 month ago (1 children)

Recent news: if VPNs are targeted, cloud accounts could be compromised too. See “Massive brute force attack uses 2.8 million IPs to target VPN devices”: https://www.bleepingcomputer.com/news/security/massive-brute-force-attack-uses-28-million-ips-to-target-vpn-devices/

 

Dear Friends,

I just wanted to take a moment to sincerely thank everyone for your incredibly thoughtful and detailed responses about the films in general. However, I find myself in a difficult situation when it comes to safeguarding the PERSONAL FAMILY PHOTOS and VIDEOS.

  • On one hand, if I choose to store them encrypted in the cloud (edit: encrypt first, then upload), I face significant privacy concerns. While they might be secure now, there is always the potential for future breaches or compromises, especially with the evolving risks associated with AI training and data misuse.

The idea of these personal moments being used in ways I can’t control or predict is deeply unsettling.

  • On the other hand, keeping these files offline doesn’t feel like a perfect solution either. There are still considerable risks of losing them due to physical damage, especially since I live in an area prone to earthquakes. The possibility of losing IRREPLACEABLE MEMORIES due to natural disasters or other unforeseen events is always a WORRY.

How can I effectively balance these privacy, security, and physical risks to ensure the long-term safety and integrity of the FAMILY’S PERSONAL MEMORIES?

Are there strategies or solutions that can protect them both digitally and physically, while minimizing these threats?
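One common answer to the physical-loss side of this question (not something anyone in the thread has prescribed) is to keep several copies in different places and periodically verify that each copy is still intact. As a minimal sketch of the verification step, using only Python's standard library — the file names here are purely illustrative:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, reading it in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_copies(original: Path, copies: list[Path]) -> dict[Path, bool]:
    """Check that each backup copy still matches the original byte-for-byte."""
    expected = sha256_of(original)
    return {copy: sha256_of(copy) == expected for copy in copies}
```

Run against each backup drive or exported archive, a `False` entry flags a copy that has silently corrupted and needs to be re-made from a good one, which is exactly the risk earthquakes and aging media pose to irreplaceable files.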

 

How do you ensure privacy and security on cloud platforms in an age of compromised encryption, backdoors, and AI-driven hacking threats to encryption and user confidentiality?

Let’s say you’ve created a film and need to securely upload the master copy to the cloud. You want to encrypt it before uploading to prevent unauthorized access. What program would you use to achieve this?

Now, let’s consider the worst-case scenario: the encryption software itself could have a backdoor, or perhaps you're worried about AI-driven hacking techniques targeting your encryption.

Additionally, imagine your film is being used to train AI databases or is exposed to potential brute-force attacks while stored in the cloud.

What steps would you take to ensure your content is protected against a wide range of threats and prevent it from being accessed, leaked, or released without your consent?
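The question above doesn't name a tool, so as one illustration only: the "encrypt before uploading" step can be done entirely on your own machine, so that only ciphertext ever reaches the cloud provider. This sketch assumes the third-party Python `cryptography` package (an assumption on my part, not a recommendation from the thread); Fernet here reads the whole file into memory, so it suits a sketch rather than a multi-gigabyte master:

```python
from pathlib import Path
from cryptography.fernet import Fernet  # pip install cryptography

def encrypt_file(src: Path, dst: Path, key: bytes) -> None:
    """Encrypt src locally; only the resulting ciphertext should be uploaded."""
    dst.write_bytes(Fernet(key).encrypt(src.read_bytes()))

def decrypt_file(src: Path, dst: Path, key: bytes) -> None:
    """Recover the plaintext with the same locally stored key."""
    dst.write_bytes(Fernet(key).decrypt(src.read_bytes()))

# key = Fernet.generate_key()
# Store the key offline (e.g., printed or on a separate drive), never in the
# same cloud account as the ciphertext — the provider then holds nothing usable.
```

Whether this addresses the backdoor worry depends on trusting the encryption library itself, which loops back to the open-source audit dilemma raised earlier in the thread; independent security reviews of the library are the usual, imperfect, answer.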
