Wikipedia:Reference desk/Archives/Computing/2011 June 9
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
June 9
Laptop won't recognize FAT32 Flash Drives
Hi All,
Weird, it just started happening recently. Any time a FAT32 flash drive is inserted, I am told it needs to be formatted before use (tried with 4 drives from different people). Running chkdsk reveals no problems; output below:
C:\>chkdsk g:
The type of the file system is FAT32.
Volume Serial Number is E68E-CB52
Windows is verifying files and folders...
File and folder verification is complete.
Windows has checked the file system and found no problems.
  8,002,240 KB total disk space.
          8 KB in 2 hidden files.
      1,012 KB in 230 folders.
  3,664,296 KB in 1,559 files.
  4,336,920 KB are available.
      4,096 bytes in each allocation unit.
  2,000,560 total allocation units on disk.
  1,084,230 allocation units available on disk.
Trying to change the path to g: using cmd informs me I need to install a driver for it? Anyone have a clue what's going on and how to fix it? TIA PrinzPH (talk) 01:20, 9 June 2011 (UTC)
ADDENDUM: I am running Vista, 64bit PrinzPH (talk) 01:22, 9 June 2011 (UTC)
- Did you by any chance install an update recently for Vista, after which this problem occurred ? StuRat (talk) 07:26, 11 June 2011 (UTC)
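For anyone diagnosing the same symptom later, here is a minimal sketch (assuming Python on the affected Windows machine, and that the stick mounts as G: — the drive letter is an assumption) that asks Windows directly which filesystem it detects on the volume; a drive that chkdsk reads fine but that this call rejects points at the mount/driver layer rather than the disk itself:

```python
import ctypes

def volume_info(root="G:\\"):
    """Return the volume label, filesystem name and serial number Windows sees.

    Diagnostic sketch only; 'G:' is an assumed drive letter.
    """
    name_buf = ctypes.create_unicode_buffer(261)
    fs_buf = ctypes.create_unicode_buffer(261)
    serial = ctypes.c_uint32(0)
    ok = ctypes.windll.kernel32.GetVolumeInformationW(
        root, name_buf, len(name_buf), ctypes.byref(serial),
        None, None, fs_buf, len(fs_buf))
    if not ok:
        raise ctypes.WinError()   # e.g. "The volume does not contain a recognized file system."
    serial_str = f"{serial.value >> 16:04X}-{serial.value & 0xFFFF:04X}"  # chkdsk-style E68E-CB52
    return name_buf.value, fs_buf.value, serial_str

print(volume_info())   # expect something like ('MYSTICK', 'FAT32', 'E68E-CB52')
```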
Regarding archiving on CD/DVDs
Hello! I have a couple of questions before backing up a bunch of files on CDs and DVDs.
- In the past, I've used a Sharpie to label the disc, but after some Googling, some people say that's okay and others say it's a horrible idea. Do I need to buy a special marker for this purpose?
- DVDs have a capacity of 4.7GB, but when I burn a lot of files through Windows Vista (with the "Master burn option," which yields a UDF-formatted disk), it caps me off at 4.37GB. I know that some data is reserved for filesystem information on the DVD, but over 300MB for 4.37GB of data? What accounts for this?
- Using Windows Vista's default application for burning DVDs (whatever it is that gets invoked when you right-click a selection of files and send them to your DVD-recording drive), I've had several problems in the past and present. Just now I had it set up to burn about 4.36GB; it told me the burn was successful, but when I took the DVD out, the optical side looked only half-written, and Explorer freezes when it tries to read it. Can anyone suggest CD/DVD-writing software that is better suited for backups and more reliable?
Thanks for your help.--el Aprel (facta-facienda) 05:45, 9 June 2011 (UTC)
- Optical media capacity is always quoted with decimal prefixes whereas Windows always uses binary prefixes, so you aren't missing any capacity. Also, with DVDs the metal layer is under a layer of polycarbonate, so even in the unlikely event that the solvents or ink could damage the metal, they would have to bleed a long way before they can. In that vein, I would completely ignore the comments of anyone who doesn't make a distinction between CDs and DVDs. Nil Einne (talk) 13:01, 9 June 2011 (UTC)
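To make the prefix point concrete, a quick back-of-the-envelope check (plain Python; the only input is the 4.7 GB nominal capacity printed on the disc):

```python
nominal_bytes = 4.7e9            # "4.7 GB" on the packaging uses decimal prefixes
in_gib = nominal_bytes / 2**30   # the same capacity in binary units (GiB), as Windows reports it
print(f"{in_gib:.2f} GiB")       # ~4.38, i.e. the ~4.37 GB figure Vista shows
```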
InfraRecorder is pretty good; I've certainly never had any problems with it. As for the Sharpie thing, I'd agree with Einne about the solvents, but again I've never had any problems using Sharpies on CDs/DVDs. Touch wood. Jackacon (talk) 14:26, 9 June 2011 (UTC)
Forgot to include a link! — Preceding unsigned comment added by Jackacon (talk • contribs) 14:27, 9 June 2011 (UTC)
- Of course a Sharpie will write on a disc, but the point is that you want to be able to read the disc back in the future. I've read that alcohol in standard markers is bad for discs, but I don't know if that is true. However, Sharpie and others make markers designed for that (e.g. the Sharpie CD/DVD Marker), so might as well use them. Remember - it is the top surface that you most don't want to damage. Bubba73 You talkin' to me? 16:05, 9 June 2011 (UTC)
- No, as I pointed out this isn't true for DVDs. Damaging the bottom surface is in fact clearly worse. (Since if you do something that penetrates and damages the metal you're screwed either side, but if you do something which damages the polycarbonate enough to prevent the laser properly reading the pits this is only a problem on the bottom. Obviously we're not talking about double sided DVDs where top and bottom are meaningless.) Nil Einne (talk) 17:36, 9 June 2011 (UTC)
- Y'mean "Damaging the top surface" there? ¦ Reisio (talk) 07:08, 10 June 2011 (UTC)
Imgburn AvrillirvA (talk) 16:36, 9 June 2011 (UTC)
Nil Einne, regarding your point about the Sharpie, you said DVDs have a metallic layer that would likely block any solvents from breaking through to the bottom layer, but is the same true for CDs? If not, what would you suggest for labeling them instead? (I'm guessing CDs don't have the same protection, because I recently bought a pack of Verbatims, wrote in permanent marker on the top, and could see through to the permanent marker when holding it upside-down up to the light. It still plays fine [for now], but it makes me a bit concerned.)--el Aprel (facta-facienda) 20:11, 9 June 2011 (UTC)
- Also I suggest using high-quality discs. I use Taiyo Yuden/JVC and I think Mitsui are also good. Bubba73 You talkin' to me? 20:20, 9 June 2011 (UTC)
- DVDs have a polycarbonate layer on the bottom, then a reflective recording layer, then another polycarbonate layer, then the label on top. CDs have a polycarbonate layer on the bottom, then a reflective recording layer, then the label is directly on top of the recording layer. It's the lack of an upper polycarbonate layer that's a problem when writing on CD labels. --Carnildo (talk) 00:55, 10 June 2011 (UTC)
- If you intend to archive valuable data on optical media, you may want to consider using scratch-resistant Imation Forcefield CD-Rs and Forcefield DVD±Rs. I've found them to be quite durable. Rocketshiporion♫ 00:04, 15 June 2011 (UTC)
Knowledge Management System of an organization using RDF in Semantic Web
Dear Sir/Madam,
How can I design a Knowledge Management System for an organization such as a university using RDF, in the context of the Semantic Web? — Preceding unsigned comment added by Subhakoley (talk • contribs) 06:08, 9 June 2011 (UTC)
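To make the RDF side concrete, a minimal sketch with the Python rdflib library (the uni: namespace, class names and taughtBy property are invented for illustration; a real knowledge management system would reuse standard vocabularies such as FOAF and Dublin Core, define an ontology of the university's departments, courses, people and documents, and store the triples in a SPARQL-capable triple store):

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

# Made-up namespace and vocabulary for the example
UNI = Namespace("http://example.org/university#")

g = Graph()
g.bind("uni", UNI)

# A tiny schema: courses, lecturers, and who teaches what
g.add((UNI.Course, RDF.type, RDFS.Class))
g.add((UNI.Lecturer, RDF.type, RDFS.Class))
g.add((UNI.taughtBy, RDF.type, RDF.Property))

# Some instance data -- the knowledge actually being managed
g.add((UNI.cs101, RDF.type, UNI.Course))
g.add((UNI.cs101, RDFS.label, Literal("Introduction to Computing")))
g.add((UNI.asmith, RDF.type, UNI.Lecturer))
g.add((UNI.cs101, UNI.taughtBy, UNI.asmith))

# Query the knowledge base with SPARQL: which courses does asmith teach?
for row in g.query(
    "SELECT ?course WHERE { ?course uni:taughtBy uni:asmith }",
    initNs={"uni": UNI},
):
    print(row.course)
```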
Why THAT password?
According to this article, the passwords
- seinfeld, password, winner, 123456, purple, sweeps, contest, princess, maggie, 9452, peanut, shadow, ginger, michael, buster, sunshine, tigger, cookie, george, summer, taylor, bosco, abc123, ashley, bailey are often used.
- What is the "reason" for choosing 9452? Greetings from France! Grey Geezer 13:21, 9 June 2011 (UTC) — Preceding unsigned comment added by Grey Geezer (talk • contribs)
- http://www.irs.gov/pub/irs-pdf/f9452.pdf ? everybody pays taxes... but that bearded "good" guy in human target... --Homer Landskirty (talk) 13:30, 9 June 2011 (UTC)
- Thanks for the answer. Love "the bearded guy", too. Never take cheques either ... ;-) Grey Geezer 13:33, 9 June 2011 (UTC) — Preceding unsigned comment added by Grey Geezer (talk • contribs)
- It could be related to whatever the people were signing up for. For example, the password "kasparov" might be common on a chess site, but not elsewhere. —Tommyjb Talk! (13:37, 9 June 2011)
- The password's owner might have been born on 1994/5/2, have a child born then, or have some other reason for choosing that date. CS Miller (talk) 13:48, 9 June 2011 (UTC)
- The original article here states that these were apparently user/pass values for a competition or sweepstakes of some kind. There must have been something on the signup form that caused the users to lean towards 9452. -- kainaw™ 13:54, 9 June 2011 (UTC)
- Meaning they saw something (permanently on that page), so they could remember it easily later? Grey Geezer 14:10, 9 June 2011 (UTC) — Preceding unsigned comment added by Grey Geezer (talk • contribs)
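As an aside on how such lists get produced: spotting that a value like 9452 is over-represented in a leaked dump only takes a frequency count. A minimal sketch (the filename and one-password-per-line format are assumptions about the dump):

```python
from collections import Counter

# Hypothetical dump file: one password per line
with open("leaked_passwords.txt", encoding="utf-8", errors="replace") as f:
    counts = Counter(line.strip() for line in f if line.strip())

# The most common entries; an unexpected value such as "9452" ranking this
# high usually points to something on the signup page itself
for password, n in counts.most_common(25):
    print(f"{n:6d}  {password}")
```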
how does NAT traversal work? (or busting through a firewall)
Hi guys. How does NAT traversal work (or busting through a firewall)? Someone said that they thought the reason Skype really succeeded was because it worked from behind corporate firewalls (even though it was a peer-to-peer technology) -- how is that? What did it do to get through, or did all traffic from behind two firewalls have to go through Skype's servers, after being initiated from the Skype client? That seems like an awful lot of traffic to go through Skype's servers... Thanks for any responses or insight you might have on this point of curiosity. 87.194.221.239 (talk) 17:02, 9 June 2011 (UTC)
- p2p networks work with NAT as long as a sufficient number of nodes in the network are not behind a NAT (or have port forwarding set up, so the NAT isn't blocking inbound connections to them). The networks use these supernodes, which proxy connections from one NATted client to another. This post discusses Skype and supernodes, and speculates that, rather than relying on ordinary users running supernodes (as p2p file sharing networks do), Skype Inc. actually runs its own supernodes as well. -- Finlay McWalter ☻ Talk 18:52, 9 June 2011 (UTC)
- Most P2P clients, including Skype, use a variety of techniques, including what FM has mentioned but also other things. Start with NAT traversal and read linked articles like UDP hole punching and TCP hole punching. Nil Einne (talk) 21:25, 9 June 2011 (UTC)
- If the NAT device supports Universal Plug and Play (UPnP), then Skype can ask for its standard port to be forwarded to it. NB UPnP is not related to the similarly named plug and play for hardware detection in PCs. CS Miller (talk) 22:47, 9 June 2011 (UTC)
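A minimal sketch of the kind of port-mapping request CS Miller describes, using the Python bindings of the miniupnpc library (this assumes the bindings are installed and the router has UPnP/IGD enabled; the port number and description are arbitrary examples):

```python
import miniupnpc

upnp = miniupnpc.UPnP()
upnp.discoverdelay = 200          # ms to wait for UPnP devices to answer
upnp.discover()                   # broadcast a search on the LAN
upnp.selectigd()                  # pick the Internet Gateway Device that replied

# Ask the router to forward external TCP port 50000 to this machine
upnp.addportmapping(50000, 'TCP', upnp.lanaddr, 50000, 'example mapping', '')
print("external IP:", upnp.externalipaddress())

# Remove the mapping when done
upnp.deleteportmapping(50000, 'TCP')
```

With a mapping like this in place the client is directly reachable from outside, so no hole punching or relaying is needed for it.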
- Generally, firewall busting exploits an assumption in stateful NAT devices: that when one device communicates with an external address from a particular port, any response received from that address to that particular port on the NAT device is most likely intended for receipt by the original device. For example, consider a topology where there is a NAT device with the public address 4.4.4.4 and the private address 10.0.0.1, and everything behind the firewall is in the 10.0.0.0/24 network. When a device at, say, 10.0.0.16 transmits data from its own port 500 through the NAT device to the external address 8.8.8.8, the NAT device will assume for some period of time thereafter that data received on the public interface addressed to port 500 from 8.8.8.8 is, in fact, intended for receipt by the internal device 10.0.0.16. You might also review STUN for more information. 24.177.120.138 (talk) 00:00, 12 June 2011 (UTC)
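A minimal sketch of one half of the UDP hole punching that this mapping behaviour enables (plain Python sockets; the peer address is a documentation placeholder, and a real client learns its peer's public address:port from a rendezvous/STUN-style server, which this sketch omits):

```python
import socket

# Placeholder: in practice a rendezvous server tells each peer the public
# ip:port that the other peer's NAT has mapped for it.
PEER_PUBLIC = ("203.0.113.7", 40000)   # example/documentation address
LOCAL_PORT = 40000

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", LOCAL_PORT))

# The outbound packet creates a mapping on our own NAT; replies from
# PEER_PUBLIC to that mapping are then let back in, as described above.
sock.sendto(b"punch", PEER_PUBLIC)

sock.settimeout(5.0)
try:
    data, addr = sock.recvfrom(1500)   # the peer does the same thing towards us
    print("got", data, "from", addr)
except socket.timeout:
    print("no reply yet; keep retrying while the peer punches from its side")
```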
NSA and Cryptography
Reading a bit about modern cryptography, I gather that, as far as we know, the popular cryptographic algorithms (like RSA with large enough primes, or AES) are secure. They have been intensely scrutinized and studied. But we don't know much about the resources at the NSA. Could it be the case that we (the public) think that an algorithm is secure but in reality the NSA has broken it? How sure can we be that our messages are secure even from someone with (practically) infinite resources like the NSA (compared to an average user like me)? I mean, they do have the best minds. What if they have some new cryptanalysis techniques we don't know about (like when differential cryptanalysis was discovered), or a massive computer that can brute-force something "infeasible"? This is just an opinion question, and I'm wondering what the computing community here thinks. -Looking for Wisdom and Insight! (talk) 17:30, 9 June 2011 (UTC)
- publicly known approaches take a very long time on normal computers... even if overclocked and multi-processor... :-)
- there might be better algorithms and/or better hardware (someday)... but: since research at universities is public and leading, it would be very odd if the government had a "secret university" that is so much better than regular universities (why would they pay for all those professors who are not as good as those at that "secret university")...
- I think the government uses different approaches to prevent severe crimes (like undercover cops)... and really bad criminals (like spies) possibly use unbreakable cryptography (like one-time pads)... --Homer Landskirty (talk) 18:28, 9 June 2011 (UTC) e.g.: in the Federal Republic of Germany the military police repeatedly question their employees about their friends from school... --Homer Landskirty (talk) 18:32, 9 June 2011 (UTC)
- For things like passwords on common home computers, the government certainly has the resources to crack any normal person's password in a matter of hours (if not minutes). As has been shown multiple times recently, anyone with about $1,000 in hardware and some open-source free software can crack an 8-character password in far less than a day. The key is a distributed brute-force attack. Assume that I tell you it will take 1,000 years to crack some encryption with a brute force attack. So, you distribute the attack among 1,000 computers and do it in a single year. Or, you distribute it among 10,000 computers and crack it in 1/10th of a year. So, consider the NSA. What if they have a warehouse with 1,000,000 computers set up to do a distributed brute force attack? Well, we wouldn't know about it. But, here is where it gets a bit scary... Do we know ANYONE who has over 1,000,000 computers waiting to do his or her dirty work? Yes. Every day, thousands of people ignorantly (and stupidly) install malware on their home computers. That malware puts the computer under the control of a criminal who has the ability to use the computer for anything he or she likes. It could be something simple like spamming the world with ads for fake viagra. It could be something like cracking Sony passwords (assuming they've found a database where Sony actually encrypted the passwords to begin with). The real threat to the common person is not the NSA, it is the criminals who are behind the malware programs that make so much money that they can blatantly run ads on television that tell people to install the malware so your computer will run faster. -- kainaw™ 18:37, 9 June 2011 (UTC)
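A quick sketch of the arithmetic behind that scaling (the alphabet size and guess rate are illustrative assumptions, roughly in line with the GPU figures quoted below):

```python
ALPHABET = 26 + 26 + 10          # lower case + upper case + digits
LENGTH = 8                       # an 8-character password
GUESSES_PER_SEC = 3.3e9          # one modern GPU (illustrative figure)
MACHINES = 1                     # scale this up for a distributed attack

keyspace = ALPHABET ** LENGTH
seconds = keyspace / (GUESSES_PER_SEC * MACHINES)
print(f"{keyspace:.3e} candidates, worst case {seconds / 3600:.1f} hours")
# ~2.18e14 candidates -> roughly 18 hours on one GPU; 1,000 machines bring
# that down to about a minute, which is the whole point of distributing it
```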
- I would guess a brute-force attack against a 2048-bit PGP key takes much more than 1000 years on normal hardware (even on a farm of 1 million boxes)... http://www.rsa.com/rsalabs/node.asp?id=2103 --Homer Landskirty (talk) 19:14, 9 June 2011 (UTC)
- Notice that those contests are no longer active. I believe it is because they don't want people to work on using GPUs to crack them. When they quote "it takes a computer XXX years to crack this", they mean "a standard single-core single-processor Intel/AMD computer." By comparison, a modern single CPU can brute force about 10 million crack attempts per second. A modern single GPU can brute force about 3.3 billion crack attempts per second. Consider a computer with a 4-GPU graphics card. With simple distribution, you can achieve 12 billion crack attempts per second. So, the GPU clearly outperforms the CPU in this case. Further, claims of 1,000 years almost always ignore improvement in the hardware. Assume that we are using a GPU and the GPU doubles in speed of brute force attack every 10 years. So, after 10 years, we aren't looking at 990 years. We are looking at under 500 years. Another 10 years and we are looking at under 250 years. Another 10 years and we are looking at under 125 years. In all reality, that 1,000 years will take around 50 years. -- kainaw™ 19:27, 9 June 2011 (UTC)
- What is a "crack attempt"? Attacks on RSA use the general number field sieve or other complex multistage factoring algorithms, not trial division. The idea that GPUs are enough faster than CPUs to make RSA suddenly insecure is ridiculous. Ditto AES or Triple DES or any other widely used cipher primitive. It is physically impossible for speeds to keep doubling as you imagine. I don't know why RSA Inc. decided to stop giving money away, but my guess is it was no longer generating enough positive PR to justify the expense. Stop spreading FUD on the reference desk, please. -- BenRG (talk) 21:45, 9 June 2011 (UTC)
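For reference, the general number field sieve BenRG mentions runs in sub-exponential time rather than counting per-key "attempts"; its heuristic complexity for factoring a modulus N is:

```latex
% Heuristic running time of the general number field sieve for factoring N
\[
  L_N\!\left[\tfrac{1}{3}\right]
  = \exp\!\Big(\big((64/9)^{1/3} + o(1)\big)\,(\ln N)^{1/3}\,(\ln\ln N)^{2/3}\Big)
\]
```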
- Let's look at what we do know (before we get onto the vast amount of stuff we don't):
- We know that NSA (and their hand-in-glove cousins at GCHQ) are major employers of mathematicians (in this story at Government Executive magazine they're called "probably the largest employer of mathematicians in the United States"). We really don't know what they do, but they don't publish in ordinary mathematical journals. We know Cliff Cocks invented asymmetric cryptography ages before Diffie and Hellman, and the design of DES' S-boxes is strong circumstantial evidence that NSA was aware of differential cryptanalysis long before Biham and Shamir. We know they design government ciphers for military work and for non-secret applications (like Skipjack and Rambutan). Beyond all that we have no idea what all those people are doing.
- We know NSA has huge datacentres (e.g.). Some of that is plainly for bulk cryptanalysis, some clearly for HARVEST/ECHELON/ThinThread. But we don't know, and there's no way of finding out what, and in what proportion. We know they're very comfortable with custom ASICs as well as general-purpose computing.
- We know from the history of GCCS (Bletchley) and SIS (Friedman et al) that these guys revel in sneaky ways of breaking into things; of exploiting non-random characteristics of things, human biases and statistical weaknesses. So, like any decent cryptanalyst, they're lazy - they'll do whatever they can to avoid or minimise brute force. Attacking RSA or AES head-on is the last thing they'll try.
- So we don't know (and your guess is as good as anyones):
- ... if there are broad technical weaknesses in modern ciphers like AES which NSA and GCHQ are aware of and the public community isn't. The public community doesn't think so, but is probably outnumbered (and vastly outspent) by the secret community. And the "lots of clever people have studied it" idea shouldn't be that reassuring - lots of clever people tried to solve Fermat's last theorem, but their failure doesn't imply that it was impossible. One notable thing is that, as far as we can tell, the ciphers that NSA and GCHQ make for their own government "customers" to use appear all to be LFSRs. Maybe they've got good evidence that LFSRs are more secure, or maybe they're institutionally a collection of reactionary Blimps. There's no way of knowing.
- ... what clever japes they have for avoiding having to do hard work. Maybe AES leaks key material in some subtle way. Maybe the PRNG that GPG uses has some massive statistical bias that no-one has noticed. Maybe the probabilistically-prime numbers RSA generates as keys aren't as probably prime as you'd hope.
- Combining a modest weakness in an algorithm with a modest weakness in its implementation (from timing analysis to poor random numbers) with massive computing power and rooms full of custom ASICs, they may very well be able to drill into actual implementations of modern cryptosystems. Given the money they take up, they darn well should be able to do so. But we don't know, and can only begin to guess.
- For governments (in general) interested in individuals, we know that strong crypto is probably one of the harder points for them to attack. It's much easier to:
- Sneak into your house and put a keylogger into your computer, or plant a virus there. Like Magic Lantern, for example.
- Plant some child porn on your laptop, or some heroin in your bag when you check it at the airport. Then they have plenty of leverage over you.
- Put a gun to your head.
- Beyond that, we're into black-helicopter land. I rather doubt it, but maybe Big Brother can:
- read your keyboard's usb traffic using its Van Eck emissions with a SQUID on an NRO satellite in low orbit, or on a drone.
- activate the firmware built into the southbridge of your computer, to inject polymorphic surveillance code into your PC (there was much of a stooshie a few years ago about such "Chinese Spy Chips" allegedly existing)
- So really we've no way of knowing what NSA is doing, or what it can do. Electronic communications security is absolutely their home ground and betting that anyone (bar another such agency, and even then) can best them there is unwise. Realising that is one thing that kept Osama alive as long as he lasted. But as Kainaw points out, the NSA really aren't your problem, and if they are, no amount of crypto is really going to help. -- Finlay McWalter ☻ Talk 20:01, 9 June 2011 (UTC)
- To Finlay McWalter's list of bullet points, I can only contribute a link to this xkcd cartoon. Comet Tuttle (talk) 17:03, 10 June 2011 (UTC)
- Just on your final point (in a very nice post), I would point out something similar: the problems with the NSA are not that they can potentially crack crypto. The problem is that they can monitor all electronic communication without (apparently) any legal checks.
- On the "secret university" mentioned above, we do indeed have plenty of these in the US; they are called the national laboratory system. The US spends gobs of money on secret R&D every year. Much of it has evolved to be, essentially, a "secret university" system in many ways, including classified journals and other "black" versions of what are traditionally academic structures. --Mr.98 (talk) 20:51, 9 June 2011 (UTC)
- Let me add some anti-fear here:
- Differential cryptanalysis was discovered independently by IBM and NSA researchers. The IBM researchers agreed to keep it secret in the national interest. I don't think that would happen today; the community now is much more open (against security by obscurity) and much more international. It's also much more active; there's no reason to think the NSA has the multi-year lead now that it did then.
- In addition to breaking foreign crypto, the NSA is supposed to approve unbreakable crypto for use by US agencies. They currently approve AES-128 for secret information and AES-192 and AES-256 for top secret. I have enough confidence in the system to believe that any NSA executive who would choose to do that, while knowing that the NSA can break AES, would be fired and replaced by someone more competent.
- Skipjack, developed by the NSA and later declassified, was successfully attacked (though not completely broken) by academic cryptographers, which suggests that the NSA's cryptographers aren't all that. Could the vulnerabilities have been deliberate? Sure, but that would have been foolish given that the NSA would have had access to the keys anyway. Vulnerabilities would only aid the enemy.
- If the NSA does have the ability to break AES or RSA, they won't use it against the likes of you. Unless you're the despotic ruler of a country with nuclear capability, nothing they might learn about you would be important enough to justify the risk of leaking a secret that valuable.
- -- BenRG (talk) 01:44, 10 June 2011 (UTC)
- .. And with all those people working for these organisations, there's probably a good chance that at least one of them might pass the information to Wikileaks or similar. (Cf Moon landing conspiracy theories - the thousands of people involved can't all be keeping the secret.) AndrewWTaylor (talk) 12:32, 10 June 2011 (UTC)
- "Vulnerabilities would only aid the enemy." Not so — they're perfect for giving false information out and having it appear to be legitimate. There are all sorts of ways in which apparent vulnerabilities have been used in the past in order to play a double game.
- And re: Wikileaks and conspiracies — there have been a few instances of whistleblowers (e.g. the Thomas Drake case, currently in progress), but if anyone is going to figure out who gave NSA information away, it's the NSA. (Do you think people at the NSA trust Wikileaks to keep them anonymous or use the data in the most efficient manner possible? I doubt it. I suspect they'd go to Congress, not Wikileaks.) As far as large scale "real" conspiracies go, the NSA so far pretty much takes the modern cake with things like ThinThread. Anyway, I wouldn't expect leaks unless there were people in the organization who knew what was being done on the very top levels (compartmentalization means that only a handful of your giant organization has to know the big picture) and that those people at the high level thought there was really compelling reason to destroy their own handiwork. It does exist (see Drake, again) but it's a relatively rare constellation of phenomena — the people who get into high levels of these kinds of organizations are people who have demonstrated their loyalty and their acceptance of the overall purpose of these organizations. The Drake cases are the exceptions, not the rule, because it's rare for high level people to torpedo their entire careers (and possibly get jail time) because of abstract values. The big State Dept. Wikileak was all stuff of relatively middling classification (secret but not top secret) that was accessible through a non-compartmentalized database. I think we can presume that the NSA has more sophisticated ways of divvying up its information, given that most of it is much higher classification.
- Even with the moon landings, one could imagine a technical setup in which only a few dozen people out of the thousands knew it was a cockup. In that case it would be admittedly hard, since doing the cockup would require almost as much technical work as the legitimate one. There are definitely different ranges as to what can be kept a secret and what cannot. But it's worth remembering that the Manhattan Project was kept pretty secret (in most respects) despite having 130,000 employees. The reason was simple: of those 130,000 people, only a few hundred knew what they were building (most were entirely ignorant of the ultimate aims of their research), and those few hundred were generally screened for their loyalty, watched by intelligence agents, and even physically isolated from the rest of the world. Now, the Manhattan Project also demonstrates the problems of this: a few of those hundred definitely "leaked" the information, and got through the security nets one way or another. But they did so only to an enemy power — not, say, to the press. So even having a "leaker" doesn't mean the secrets become "public" — it just means that someone unauthorized knows about them. But that person might keep them as secret (or more secret) than you might. (Also, I don't think the moon landings were a conspiracy, obviously. There are lots of better reasons to doubt it was one than the fact that it would have been hard to keep secret.) --Mr.98 (talk) 14:13, 10 June 2011 (UTC)
- As the original poster wrote, this is sort of an opinion question, or, more accurately, it's only answerable with guesses, because all the good sources we could research and cite are of course classified. As a counterweight to some of the above, I would caution against a faith-based belief that the NSA can crack or work around AES in some sneaky way — there is apparently a very widespread belief outside of the US that its CIA is close to omniscient and omnipotent, despite ample evidence to the contrary. Comet Tuttle (talk) 17:13, 10 June 2011 (UTC)
- Though the ability to predict/affect world outcomes is different from having sneaky technical solutions. American intelligence has long been good at sneaky technical things. It's the sort of thing that throwing lots of money and R&D at actually can affect, unlike many of the CIA's issues. If I were to believe in any agency having miraculous powers over their area of expertise, it would be the NSA. I have considerably less faith that the CIA, DOE, FBI, etc., have such miracles at hand. The NSA and the crypto establishment have long been good at keeping secrets, long been good at compromising seemingly uncompromisable systems, and so on. I'm not sure there's any good reason to believe that the situation today is so much different from, say, decades past. The only real variable that has changed is that the public crypto community has gotten bigger, which does change the dynamic a bit, but it is still dwarfed by the secret crypto community. --Mr.98 (talk) 18:31, 10 June 2011 (UTC)
- Wouldn't it make sense for western intelligence to have some top-secret dedicated supercomputers somewhere already set up to crack terrorists' encoded communications, like the Colossus computers in the past? 2.97.219.191 (talk) 21:31, 10 June 2011 (UTC)
- Of course! But the problem with these modern algorithms under discussion is that they are theoretically uncrackable even with dedicated supercomputers, at least within a reasonable timeframe. That's what the question is asking: could there be a special way around that limitation, or might they have machines that defy this fact? The answer is: maybe there are vulnerabilities the public world doesn't know about (seems possible, but as others have pointed out, the very openness of these algorithms makes them easier to theoretically evaluate), or maybe (more speculative) there are machines in use whose mode of operation either is unknown to the public domain or is only a nascent technology in the public domain (e.g. quantum computers). The latter seems like a much more speculative thing than the former (but not totally out of consideration — the history of US secret R&D is littered with instances of the classified tech world being 10 years or so ahead of the public domain). The NSA certainly does have top-secret dedicated supercomputers for this sort of thing; the question is whether they would help you in cracking something that is made with modern, "military-grade" cryptography (which is quite easily available at this point in time). --Mr.98 (talk) 22:45, 10 June 2011 (UTC)
- Keep in mind that AES and the rest of Suite B isn't approved for all classified information. There's Suite A for the most super-duper secret stuff. It's entirely plausible (IMO), that only Suite A contains the algorithms that even the NSA isn't able to crack. 24.177.120.138 (talk) 17:29, 11 June 2011 (UTC)
- Those articles are very interesting (I hope you won't mind that I fixed your red link, although someone should probably add a redirect for "Suite A"). On the NSA's website referenced by those articles, NSA does approve AES for TOP SECRET information, even though it's outside of Suite A (presumably): "AES with 256-bit keys, Elliptic Curve Public Key Cryptography using the 384-bit prime modulus elliptic curve as specified in FIPS PUB 186-3 and SHA-384 are required to protect classified information at the TOP SECRET level."--el Aprel (facta-facienda) 19:24, 11 June 2011 (UTC)
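As an aside on what using AES-256 looks like in practice, here is a minimal sketch with the Python cryptography package's AES-GCM interface (the message is a placeholder; this just illustrates the primitive, not any approved or production-grade implementation):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # AES with a 256-bit key
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # 96-bit nonce; never reuse with the same key
ciphertext = aesgcm.encrypt(nonce, b"example message", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
assert plaintext == b"example message"
```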
- Yeah, I got the redirect created now. I've always been confused about the purpose of Suite A, given that Suite B is good enough for all of the known levels of classification. My best guess is that it's a tell that there exist levels of classification beyond top secret, which are, themselves, classified. (Cue the black helicopters.) 24.177.120.138 (talk) 23:51, 11 June 2011 (UTC)
- Actually, the cynic in me has another explanation for Suite A. Parts of ECC are patent encumbered, which effectively forces implementers of Suite B algorithms for TOP SECRET data to pay royalties or risk lawsuits. Suite A might be the algorithms not so encumbered, and the secrecy surrounding it might be just to line the pockets of Certicom. IOW, perhaps the explanation is more RICO than UFO. 24.177.120.138 (talk) 00:12, 12 June 2011 (UTC)
Pop-up pictures in Firefox
When you click on an image, sometimes websites provide a larger version which pops up with a white border. Is there any way to make them appear in a full Firefox window instead? For example is there an about:config setting that will do this? Thanks 92.24.185.180 (talk) 21:38, 9 June 2011 (UTC)
- This is caused by JavaScript. You could disable JavaScript entirely in Firefox's settings (Tools -> Options -> Content) or use an extension such as NoScript or Adblock Plus to selectively disable the responsible JavaScript for each individual site. You could probably also write a simple Greasemonkey script to force the images to load normally. AvrillirvA (talk) 23:42, 9 June 2011 (UTC)
- Also sometimes simply right clicking the image and selecting "open in new tab" will work. AvrillirvA (talk) 10:20, 10 June 2011 (UTC)
- Wouldn't that be nice? Stupid lightbox nonsense. ¦ Reisio (talk) 06:59, 10 June 2011 (UTC)
The article linked to by Reisio had this http://www.huddletogether.com/projects/lightbox2/ as a reference, and that is what I was referring to. Is there any more specific advice about disabling it please? 2.97.219.191 (talk) 21:06, 10 June 2011 (UTC)