Wikipedia:Reference desk/Archives/Computing/2009 August 24
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
August 24
all website links
I need a way to scan an entire website and list the URL of every HTML page, image, and file in a plain text file —Preceding unsigned comment added by 82.43.89.136 (talk) 08:13, 24 August 2009 (UTC)
- Sorry, but the only way to do this is to manually visit every page and get its URL.
Alex (talk) 11:54, 24 August 2009 (UTC)
- Bullshit. Sorry, but web crawlers do exactly what I'm asking; they just download the files instead of generating a list of links. I'm looking for something similar to a web crawler that will scan the site and list every link, file and image. —Preceding unsigned comment added by 82.43.89.136 (talk) 12:52, 24 August 2009 (UTC)
- I imagine greasemonkey would allow you to do this, though you'd have to write a script to make it so. --Tagishsimon (talk) 12:55, 24 August 2009 (UTC)
- I have a greasemonkey script that can extract links from a single page, but that's not what I need. I need a program to scan an entire website, possibly hundreds of pages, and list every .html, .jpg, .exe, etc. link it finds. —Preceding unsigned comment added by 82.43.89.136 (talk) 13:08, 24 August 2009 (UTC)
- Use wget to get all .htm or .html files (with its "recursive" parameter), then process the files with a Perl script using regular expressions to get the names of all the links? (I don't know the details of this, but it should be possible to learn with some effort; a minimal sketch along these lines appears at the end of this thread.) Jørgen (talk) 14:06, 24 August 2009 (UTC)
- :( I was hoping for an easy way. I found this program called URL Extractor which does exactly what I want, but the free trial is limited. I've searched for free open source alternatives but can't find anything. Ah well, thanks for trying :) —Preceding unsigned comment added by 82.43.89.136 (talk) 14:30, 24 August 2009 (UTC)
- Try Xenu - it has some save/report options that probably do what you want. Unilynx (talk) 21:26, 24 August 2009 (UTC)
- holy shit that's perfect, THANK YOU!
- With wget, you can just run wget -m --delete-after -nv http://yoursite.com and it'll do pretty much exactly what you want, although it'll download every file on the website, so it can take a lot of bandwidth/time. If you use this on a site that you don't own, it would be considerate to use a wait interval, like -w 10. This will take a lot more time, but cause less load on the server. Also, keep in mind that using recursion won't show you files which aren't connected to your starting point through some path of links. But other than that, it's an easy solution. Indeterminate (talk) 22:40, 24 August 2009 (UTC)
- You can tweak that command in various ways to make it faster and friendlier, like excluding everything but HTML files and so on. There are many reasons this won't get all links, though, as web sites are very dynamic these days. --Sean 23:58, 24 August 2009 (UTC)
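A minimal sketch of the wget-plus-Perl approach suggested in the thread above, assuming the site has already been mirrored into a local directory (for example with wget -m -w 10 http://example.com/) and that a crude href/src regular expression is close enough. The directory name is only a placeholder, and a regex like this will miss links that pages generate with JavaScript:

```perl
#!/usr/bin/perl
# Sketch: list every link (href/src) found in a locally mirrored site.
# Assumes the mirror lives under ./example.com (as created by wget -m);
# the directory name and the regex are simplifications, not a full HTML parser.
use strict;
use warnings;
use File::Find;

my %seen;
find(sub {
    return unless /\.html?$/i;        # only look inside .htm / .html files
    open my $fh, '<', $_ or return;   # skip anything unreadable
    local $/;                         # slurp the whole file at once
    my $html = <$fh>;
    close $fh;
    # Crude extraction of href="..." and src="..." targets.
    while ($html =~ /(?:href|src)\s*=\s*["']([^"'#]+)/gi) {
        $seen{$1} = 1;
    }
}, 'example.com');

# One URL (or relative path) per line, as a plain-text list.
print "$_\n" for sort keys %seen;
```

Redirecting the script's output to a file (perl whatever-you-call-it.pl > urls.txt) gives the plain-text list asked for at the top of the thread; Xenu's save/report options remain the simpler route if a GUI tool is acceptable.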
Using Nokia AD54 remote control as an extension cord
I'm trying to use the Nokia AD54 remote control as an extension cord for my earphones on my laptop, since the cords on my earphones are extremely short. It works well on my Nokia phone and on my stereo, but on my laptop (a Dell Vostro), when fully pushed in, it gives something like a poorly designed vocal removal effect, which I suspect is because it's sending the L-R signal to both channels of my earphones. If I pull it out a bit I get the left channel on both earphones. The connector on the remote is a standard 3.5mm TRS connector with an extra ring above the right channel. Does anyone know why it does not work on my laptop while it works on my stereo and desktop? Is there anything I can do to fix it? --antilivedT | C | G 11:13, 24 August 2009 (UTC)
problem in Excel: left and right arrow keys scroll the view instead of moving from one cell to another
Hello there, everyone:
I have a problem with Excel which is small but considerably annoying. I rely on the left and right arrow keys to go from one cell to another, as it makes entering a bunch of data much easier than constantly clicking. Suddenly it stopped doing this, and now the keys only scroll the view without moving from one cell to another - does anyone know how to fix it?
All the best —Preceding unsigned comment added by 81.202.202.14 (talk) 12:50, 24 August 2009 (UTC)
- This is usually because you have your Scroll Lock on. ny156uk (talk) 13:27, 24 August 2009 (UTC)
Anyone an expert on DOS game history?
Please see Wikipedia:Reference_desk/Entertainment#Game 83.100.250.79 (talk) 14:38, 24 August 2009 (UTC)
Unblocking a downloaded file in Windows Vista
I have downloaded an executable file from the Internet, and I am positive that it is not malicious. However, Windows Vista has blocked it, so I have to confirm the execution of the program every time I try to run it. This is very annoying. And worse yet: in the file properties dialog box, there is an "Unblock" button, but it does not work! (Apparently this is a bug in Windows - even if Microsoft for some reason really does not want me to be able to unblock the application, the button should not be enabled if it has no effect. I have SP2.) I have tried to run "explorer.exe" as administrator and opened the file properties dialog from there, but that did not work either. How do I unblock the file? --Andreas Rejbrand (talk) 18:42, 24 August 2009 (UTC)
- Apparently the zone information about where the executable came from is kept in an alternate data stream associated with the file (a short sketch of inspecting that stream appears at the end of this thread). One way to supposedly clear up the problem is to download the Sysinternals command-line program streams.exe. If you put streams.exe in the same folder as your executable, then open a command prompt, cd to that folder and type "streams -d (yourexecutablefilename).exe", it'll delete the associated data streams. Hopefully that should clear it up. If you want to disable the feature altogether, try option 3 on this page to disable it through local policy. Indeterminate (talk) 22:13, 24 August 2009 (UTC)
- He can also do it the easy way: right-click, go to Properties, and there should be an option to unblock it. Rgoodermote 06:54, 25 August 2009 (UTC)
- If you had read my original post three paragraphs above, you would have noticed that this was the first thing I tried, but it didn't work! :) --Andreas Rejbrand (talk) 11:52, 25 August 2009 (UTC)
- Thank you very much, Indeterminate! It really worked! --Andreas Rejbrand (talk) 11:56, 25 August 2009 (UTC)
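A footnote to the alternate-data-stream explanation in the thread above: on an NTFS volume the zone marker can be read like an ordinary file by appending :Zone.Identifier to the path, which is a quick way to confirm whether the stream is present before or after running streams -d. A minimal Perl sketch, with blocked.exe standing in for whatever file was downloaded:

```perl
#!/usr/bin/perl
# Sketch: print the Zone.Identifier alternate data stream that makes
# Windows treat a downloaded file as "blocked". The file name is a placeholder.
use strict;
use warnings;

my $file = 'blocked.exe';
if (open my $fh, '<', $file . ':Zone.Identifier') {
    print "Zone information attached to $file:\n";
    print while <$fh>;    # typically "[ZoneTransfer]" followed by "ZoneId=3"
    close $fh;
} else {
    print "No Zone.Identifier stream on $file (already unblocked?)\n";
}
```

Actually removing the stream is still a job for streams -d (or the local-policy change mentioned above); this sketch only shows what the blocking flag is.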
Why does the fan stay on after switching off?
I have read that the leading contributor to eventual failure of electronic components is the constant warming up and cooling down, and that it is better for the components to warm up slowly and cool down slowly. Why, then, would a fan continue turning (both my laptop and PC PSU do this) after the power goes off? Surely this is unnecessary at best and, at worst, damaging, by cooling the components down more quickly than would otherwise be the case? ----Seans Potato Business 18:54, 24 August 2009 (UTC)
- If you have a hot object that's being continually kept cool-ish with a fan, and you turn off the power to both it and the fan at once, it will stay at the same temperature for a while, but without the fan it will stew in its own heat. Keeping the fan on for a while afterwards lets its temperature coast down to ambient at a more gradual pace. Remember that you're not really "cooling" something with a fan, just controlling the extent to which it heats its environment. -- Finlay McWalter • Talk 19:47, 24 August 2009 (UTC)
- Also, while thermal cycling is probably a leading cause of failure for certain components (e.g. wirebonds, where the materials have different thermal expansion coefficients), other components (like semiconductors) are probably more sensitive to the number of hours spent at temperature. Migration of dopant is proportional to the number of hours endured at high temperature. Failure analysis is a tough problem; letting the fan run seems like an "engineering approximation" to the optimal thermal profile, given the constraint that it's hard to really estimate the likely cause of failure, and even harder to actually control the temperature in an ideal way. Nimur (talk) 21:23, 24 August 2009 (UTC)
- FWIW, integrated circuits don't fail because of dopant migration - much higher temperatures are needed for that than you will ever reach running hot (typically 800 to 1200 degrees Celsius). They fail for a variety of other reasons, but one classic failure mode was "electromigration", where aluminium atoms are pushed along by electrons and we end up with voids in the conductor tracks. That, like most other failure modes, is temperature sensitive. Of course, if you get the chip hot enough, the aluminium will just melt :-) --Phil Holmes (talk) 08:31, 25 August 2009 (UTC)
- I have never seen a computer PSU keep the fan running after turning the power off. Slide projectors do let you leave the fan running after the lamp is turned off, and a myth evolved that cooling the bulb down faster (by running the fan) makes it last longer. The real reason for that feature is that if a bulb burns out, you want to cool it off quickly so you can change it without burning your fingers, minimizing the length of interruption to your slide show. 70.90.174.101 (talk) 07:52, 25 August 2009 (UTC)
- I own a "Be Quiet 1200W Dark Power Pro Modular" PSU and it does this by design. There are 4 special ports designed specifically for fans to be connected to them and they keep those 4 fans spinning for 2 minutes after the machine has powered down. I don't have the manual to hand, but the reason was literally something like because it keeps cooling the components down rather than just leaving them hot like a normal PSU would. ZX81 talk 12:30, 25 August 2009 (UTC)