Wikipedia:Reference desk/Archives/Computing/2010 April 20

Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


April 20

Laptop hard drives

Most laptop hard drives these days are 9.5mm thick and have two platters. Some (that only fit larger laptops) are 12.5mm thick and have three platters, giving higher total capacity. But last year Samsung made a 3-platter drive that was 9.5mm thick. Why isn't that done more often? Are they less reliable? 66.127.54.238 (talk) 02:54, 20 April 2010 (UTC)[reply]

It might not be cost-effective enough to bother with it - maybe a 12.5mm thick three-platter drive is just cheaper to produce than investing the money to research ways to make three-platter hard drives thinner. Chevymontecarlo. 11:47, 20 April 2010 (UTC)[reply]
But the research has already been done, in the case of that Samsung drive. And the market for 2.5" drives thicker than 9.5mm is quite limited. 66.127.53.162 (talk) 16:54, 20 April 2010 (UTC)[reply]
A.) Possibly Samsung patented their process, which would probably require royalties; B.) Just because the research has been done doesn't mean that the item was profitable or well-received; etc., etc. Market forces are fun things. Just because it has been done before does not mean it will be oft-repeated. Washii (talk) 04:13, 23 April 2010 (UTC)[reply]

Regular expressions to manipulate a text file

Hello all. I was hoping one of you could help me out with something: I'm editing a number of long, long lists with Notepad++. Notepad++ supports regular expressions for the find and replace feature, and I know there must be some way to say "erase every line that doesn't have 197 in it", but despite reading tutorials, I can't seem to work it out. It'd be really helpful if someone could tell me what regular expression I could use to do that. Thank you kindly. 202.10.86.236 (talk) 05:09, 20 April 2010 (UTC)[reply]

Regular expressions are not normally used to locate something that is not in a string of text. To do so is theoretically simple: you create a regular expression that looks for what you want and then negate it. However, negating an NFA has a tendency to create a monstrous and confusing mess. Because you mention Notepad++, I assume you are using Windows, so I won't suggest grep or sed to do this task quickly on the command line. -- kainaw 06:02, 20 April 2010 (UTC)[reply]
findstr is available on Windows on the command prompt; it clones much of the basic functionality of grep, but uses a different (Windows-style, and generally simpler) regular expression syntax. Official documentation is available here. The command:
findstr 197 inputfile.txt
...will only print lines that contain 197. The command:
findstr /v 197 inputfile.txt
...will display all lines that don't have 197. If you want to dump this to an output file, you can use standard out:
findstr /v 197 inputfile.txt > outputfile.txt
Nimur (talk) 06:37, 20 April 2010 (UTC)[reply]
Nimur's got a good idea except don't use the /v switch (my mistake earlier). I had no idea that program existed, let alone was built-in.
Here's my now irrelevant rephrasing: This appears to indicate Notepad++ doesn't support negative lookahead: [1].
Really what you're asking is to display every line that does have 197 in it. Using something like grep, you could use the command "grep 197", which would display all lines that have 197. But you probably don't have grep on a Windows machine.
The inverse, to remove all lines that don't have it, is impossible (I think) using regex from the information we had. I don't think it's possible to match an arbitrary string that doesn't contain a substring, with regex alone (correct me if I'm wrong, someone; this is mostly a half-hunch). On the other hand, if we had some idea of where 197 would be in the string, or even how long the string was, or if 197 always appeared in a certain pattern in the line, it might be possible (although depending on how predictable it is, it might still require negative lookahead). Shadowjams (talk) 06:39, 20 April 2010 (UTC)[reply]
As a workaround, you can do it in multiple steps: replace ^ with A (adding an A to the start of all lines), ^A(.*197) with B\1 (changing the A to a B in lines containing a 197), ^A.*$ as empty (erasing lines that still start with A), and finally ^B as empty (removing the B from the remaining lines). —Korath (Talk) 06:40, 20 April 2010 (UTC)[reply]
Actually it'd need to be ^., but that's a brilliant solution to get around the problem of arbitrary string length. Shadowjams (talk) 06:43, 20 April 2010 (UTC)[reply]
Regexp#Expressive_power_and_compactness indicates that complement is expressible in terms of other symbols; however, doing so can be computationally expensive. So Korath's method is quite possibly the best. Taemyr (talk) 14:38, 20 April 2010 (UTC)[reply]
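Where a scripting language is available, the whole job reduces to keeping only the matching lines; engines that support negative lookahead can instead match the unwanted lines directly with ^(?!.*197). A minimal Python sketch of the keep-only-matching-lines approach, with placeholder file names:

    import re

    # Keep only the lines that contain "197"; every other line is dropped.
    with open("inputfile.txt") as src:
        kept = [line for line in src if re.search(r"197", line)]

    with open("outputfile.txt", "w") as dst:
        dst.writelines(kept)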

E-mail account not retrieving mail

I signed up for an account on a music website that uses WordPress; they're supposed to email me my password, so I went into my email account to retrieve it, but I can't find it. I even disabled my spam filter; still nothing. And I am absolutely positive that I typed my address properly. What is the problem here? 24.189.90.68 (talk) 06:06, 20 April 2010 (UTC)[reply]

Apart from human mistakes, there are possible machine problems, such as a disk that's too full or a server that's down. Large email senders may give up very easily and silently. It may be a good idea to try again. Email is not guaranteed. Graeme Bartlett (talk) 11:43, 20 April 2010 (UTC)[reply]

Is Facebook or wikipedia an example of cloud computing?

According to the article, cloud computing is computing whereby data is provided to a computer on-demand. So I would imagine FB and wiki are examples of cloud computing since files are stored on the wiki and FB servers and then downloaded when needed. Have I got that right? Isn't cloud computing just another name for a computer network? I don't know much about computers so I'm a bit confused. ExitRight (talk) 06:49, 20 April 2010 (UTC)[reply]

In theory this should be covered in our cloud computing article, but it looks like you've looked there. The definition in use is narrower than that. I would say that it covers tasks that have traditionally been done on desktop computers, and implies a certain degree of private data and personalized processing power. Otherwise the definition, as you point out, applies to any networked application, and that's certainly not the emphasis used. Really it's more of a buzzword, just like Web 2.0 or something similar; it doesn't have a precise definition. Shadowjams (talk) 06:55, 20 April 2010 (UTC)[reply]
Thanks. I was getting the feeling that there was no agreed definition, because I couldn't quite figure out whether WP and FB were examples of cloud computing or not after reading stuff from all over the web. ExitRight (talk) 22:31, 21 April 2010 (UTC)[reply]

That article is terrible. Cloud computing basically means automating the setup, acquisition, and release of rented servers at a big provider. For example, with traditional web hosting, if you want a dedicated server, you'd fill out purchase orders, deal with sales people, spend days waiting for the provider to set up your server, and you'd pay for it by the month. With cloud computing, you can click a few things and have a server within an hour with no human intervention, and pay for it by the hour (about 40 cents an hour for a midsized Amazon EC2 instance as of last year, I think). That means you might have your web site on a small setup, but then when it's slammed by heavy traffic because it's featured on a TV show, you can quickly add another ten servers to handle the load, then release them a day later when the load subsides, all at low cost. Or if you have a big compute task (fulltext indexing a newspaper archive, say) that might take a year on a desktop computer, you can instead rent 100 computers in a cloud and do the job in a few days (the New York Times did a big index job that way on EC2, if I remember right). The technological aspects are in automatic provisioning, virtualization to run several customer instances on a single large multi-cpu machine, etc. There are some ugly aspects (mentioned in the article), such as the fact that your provider now has its hands on your data to a greater extent than it would with colo. I'd say Facebook and WP are not cloud computing, they're just big websites, although FB might use cloud-like provisioning inside the company (I know Google does that). 66.127.54.150 (talk) 09:40, 20 April 2010 (UTC)[reply]
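To make the "click a few things and have a server" point concrete, here is a minimal sketch using the boto Python library for EC2; the credentials and AMI ID are placeholders, and the exact calls should be treated as illustrative rather than definitive:

    from boto.ec2.connection import EC2Connection

    # Placeholder credentials and machine image ID -- not real values.
    conn = EC2Connection("ACCESS_KEY", "SECRET_KEY")

    # Rent ten servers when traffic spikes...
    reservation = conn.run_instances("ami-12345678", min_count=10,
                                     max_count=10, instance_type="m1.small")

    # ...and release them when the load subsides, paying only for hours used.
    conn.terminate_instances([i.id for i in reservation.instances])

No purchase orders or sales people are involved; the whole acquire-and-release cycle is an API call on each end.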

Hello 66.127. So if I take the definition that cloud computing is automatic setup, acquisition, and release of rented servers, then the reason why FB and WP are not cloud computing is that end users don't rent the servers? Sorry if it's a silly question, I just want to be sure of telling the difference based on your definition. ExitRight (talk) 22:31, 21 April 2010 (UTC)[reply]
That is the definition I'm used to, but as Shadowjams says, it's become such a buzzword that other people are using it to mean anything they want it to mean. I guess another view of cloud computing is anytime you upload your data to someone else's server over the internet, and run computational tasks on it that you traditionally would have run on your own computer. By that definition, Google Spreadsheets would be a canonical example. You don't know where your data actually is; it's just someplace in the "cloud". 66.127.53.162 (talk) 22:39, 22 April 2010 (UTC)[reply]
Thanks for your help. ExitRight (talk) 05:39, 23 April 2010 (UTC)[reply]

http://www
This seems excessive and a waste of ink. Could it easily be shortened across the web? Kittybrewster 08:43, 20 April 2010 (UTC)[reply]

If I understand what you're saying correctly, there's no reason why you can't either run your website, or set up some sort of forwarding, or even simply a CNAME, so that people can enter just the domain name without the www subdomain. In fact, the vast majority of websites nowadays do that. E.g. http://bing.com, http://google.co.nz, http://microsoft.com, http://govt.nz, http://gov.my, http://whitehouse.gov, http://ebay.com, http://dealextreme.com, http://jaring.my, http://slingshot.co.nz, http://bbc.co.uk, http://nzherald.co.nz, http://thestar.my ... Occasionally, of course, they may point to different places.
In terms of the http:// to identify the protocol, most web browsers will automatically assume you mean http:// if you don't enter a protocol. However, in print it may or may not be obvious that you're referring to a website, depending on the context and other things. Some people may put in the http:// in any case (since in print you'll usually be typing it out). In computer programs, not everything will automatically assume something without the protocol is an http website; Wikipedia is one example, e.g. gov.my (compare to http://gov.my as above), as are Live Messenger and many e-mail programs, and there are plenty of reasons why that makes sense. This is perhaps one example where text could easily be misinterpreted as a website under such a scheme. Looking only for things which specify the protocol is also likely easier and less ambiguous to program. And of course there are other protocols that the server behind an address/URL could be running: https is perhaps the most common and most relevant here, but also Telnet, IRC, FTP, IMAP, SMTP, NNTP, XMPP, ED2K servers, torrent trackers, VNC, SSH, RDP... used in various circumstances and programs.
Nil Einne (talk) 09:31, 20 April 2010 (UTC)[reply]
I once attended a lecture by Tim Berners-Lee where he said (not entirely in jest, I think) that the length and complexity of the http:// prefix was one of his main regrets about how the WWW had been created. AndrewWTaylor (talk) 10:36, 20 April 2010 (UTC)[reply]
On the other hand, some programs automatically convert text to clickable links when they recognize an internet address on screen; "http://" is very easy and unambiguous to recognize, unlike say ".com", which could be an old ms-dos executable file extension. 195.35.160.133 (talk) 12:15, 20 April 2010 (UTC) Martin.[reply]
Keep in mind that the internet is a shared resource. You are mostly interested in human-readable world-wide web pages - that is, text, images, and multimedia. Those have historically been delivered over a computer communication protocol whose technical specification is called hypertext transfer protocol. For convenience, the term world wide web was coined to refer to any set of computers that speak that language. But other people use computers for other tasks: file transfer protocol, remote shell, bittorrent, Message Passing Interface, and so on. We need a way to universally identify what computer we are talking to, and we need a way to tell the computer what language we will speak to it. So, for your purposes, because you only care about hypertext that a web-browser would understand, "http://www.<servername>" seems redundant. But for the rest of us, who might connect to seven or ten different services on the same computer, it's absolutely essential that we can uniquely identify the machine and the language so that we don't garble up our messages. Nimur (talk) 14:12, 20 April 2010 (UTC)[reply]
1) The "http://" part isn't much of a problem, since all software seems to fill that in for you automatically. However, there is an "https://" prefix, denoting a secured connection, so that does need to be specified.
2) The "www" is also automatically filled in you omit it, almost everywhere. It's also easy to type. My objection is how hard it is to say, being a 9 syllable abbreviation of a 3 syllable term. StuRat (talk) 14:18, 20 April 2010 (UTC)[reply]
There was a time when the Internet and DNS existed and the Web did not. For years it was considered important to specify which computers were Web servers, because at the beginning the vast majority were not. (Although the very first web server did not start with www.) Comet Tuttle (talk) 18:12, 20 April 2010 (UTC)[reply]
Actually, AFAIK the www will rarely be filled in in practice. In most cases, the browser will first try without the www and it will work. As I mentioned above, this could be because of a CNAME. But alternatively, and probably more likely, there is an A record of some type: either a server that redirects to the www, or the same (or a different) server which serves the same content without any redirection (redirection isn't necessary). In a few rare cases there may be an A record but with different servers and content, or the same server may serve you something different depending on what URL you're visiting it from, i.e. a form of shared web hosting service for the www and the URL without the www subdomain.
Having said that, you do have a point that I missed: if the browser finds no web server behind the non-www version (either no A/AAAA or CNAME at all, or it has something but it doesn't work), it will usually try the www, IIRC. So even in the rare cases when there is nothing, not filling in the www shouldn't cause problems.
If a CNAME or non-transparent redirection is used (I suspect a 301 redirect is probably most common), it may seem as if the browser is filling it in, but in most situations this is not the case (the web browser learns from the server that it should be going to the www when it tries visiting the URL without the www). Of course, in some cases the non-www may be considered the 'proper' URL (if the www redirects to the bare domain, or if the www is a CNAME to it), and in some cases there may simply be no such thing as a 'proper' URL (if there's no redirection or CNAME, and both serve the same content and always use relative URLs, then both are basically equivalent; however, this is probably rare, as many sites have additional subdomains which usually point back to the main website in some way, and many probably forget to use relative URLs in some cases).
Nil Einne (talk) 22:17, 20 April 2010 (UTC)[reply]
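One way to see which form a given site treats as canonical is to request the front page from the bare domain and check whether it answers directly or sends a redirect. A minimal Python sketch (example.com is a placeholder):

    import http.client

    # Request only the headers of the front page from the bare domain.
    conn = http.client.HTTPConnection("example.com")
    conn.request("HEAD", "/")
    resp = conn.getresponse()

    if resp.status in (301, 302):
        # A redirect here often points at the www host, which browsers then follow.
        print("redirects to:", resp.getheader("Location"))
    else:
        print("serves content directly, status", resp.status)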
It is not always necessary from a technical point of view, but from a stylistic point of view, they should always be included. While it is true that most readers will deduce from the fact that a URL ends with a .com, .edu, etc., that it is a URL, good writers never make readers guess, or even pause, while reading. Further, many sites use sub-domains (e.g., <http://images.google.com>). Including a www clarifies that the domain is for a web page (i.e., it is on the world-wide web). It is also a good idea to underline any URLs and enclose them in angle brackets, like this: <http://www.google.com>. The brackets allow you to place punctuation immediately after the URL without running the risk that any punctuation is mistakenly included in the URL. If the URL can be clicked, then it should of course be blue. Certain programs make URLs you type inactive, preventing readers from clicking on them. Adding extra clues to the reader that it is a URL will help prevent confusion in such situations.--Best Dog Ever (talk) 23:05, 21 April 2010 (UTC)[reply]

PDF exploits -- is there a non-vulnerable reader?

It hasn't gotten an awful lot of attention, but I understand that it is sometimes possible to send malware in PDF files. I was wondering if there is any safe way to read a PDF file from an untrusted source. (I don't find virus scanning a satisfactory solution — PDF is just something that shouldn't be dangerous, period.) For example, can an exploit work from xpdf? I suppose I could make a sandbox with chroot or something, but that's an awful lot of trouble. --Trovatore (talk) 10:40, 20 April 2010 (UTC)[reply]

It's like text documents, I think - the malware is sent via a macro. As to your program question, I don't think there is one; hackers would get round any security measures that have been put in place. Certainly one of the most popular PDF viewers, Adobe Reader, will definitely be a vulnerable reader, just because of the sheer number of users. Chevymontecarlo. 11:44, 20 April 2010 (UTC)[reply]
As far as I know, at least some of the holes are going to be in most or all complete viewers, as they're part of the PDF specification itself. 131.111.248.99 (talk) 12:44, 20 April 2010 (UTC)[reply]
Come on guys, these are sloppy responses; let's provide references. Here is an article about a recent exploit from yesterday, along with a mention of the workaround. Comet Tuttle (talk) 18:04, 20 April 2010 (UTC)[reply]
That's a workaround for Adobe Reader. To be honest I'm more interested in xpdf. Does xpdf have any known vulnerabilities? Or, has anyone made a fork of xpdf that simply doesn't implement the dangerous (and in my estimation fairly useless) aspects of the PDF standard? --Trovatore (talk) 21:58, 20 April 2010 (UTC)[reply]
DOS CPU usage in a virtual machine

  Resolved

I asked a question about this a few months ago but I can't find it in the archives. It was about why running DOS in a virtual machine always used 50% of the CPU of the host, whereas Windows NT, XP, 2000 etc. only used the CPU they needed. People answered that it was because NT-based Windows does something different with the way it idles the CPU. I can't remember it exactly. Someone also posted an awesome link to a DOS program that would let DOS manage the CPU properly instead of always using 50%. Does anyone happen to know what program that might have been? Thanks 82.43.89.71 (talk) 13:44, 20 April 2010 (UTC)[reply]

Your original question is still in the archives (see Archive, Computing, Jan. 21). I found it by typing:
dos 50%
in the search box above. 195.35.160.133 (talk) 14:00, 20 April 2010 (UTC) Martin.[reply]
Here's a link: your archived original question. 195.35.160.133 (talk) 14:05, 20 April 2010 (UTC) Martin.[reply]
(ec)Here's the link: Wikipedia:Reference_desk/Archives/Computing/2010_January_21#Why_do_virtual_machines_peg_the_CPU.3F. StuRat (talk) 14:05, 20 April 2010 (UTC)[reply]

Awesome! Thanks 82.43.89.71 (talk) 14:16, 20 April 2010 (UTC)[reply]

posting information

Could I save something onto a CD, memory card, or some other data storage device and post it in an envelope, or would it get broken on the way?

148.197.114.158 (talk) 13:47, 20 April 2010 (UTC)[reply]

Well, Netflix does send DVDs (basically the same thing as CDs) through the mail, without the cases needed to protect them (because this reduces postage), but they do often get broken. A memory card is a bit tougher, especially if the case is included, but a USB flash drive/pen drive/stick drive is probably the toughest of the three, especially if wrapped in bubble-wrap. However, they are more expensive. So, if getting the data there on time is critical, go with the USB flash drive. If doing it as cheaply as possible is important, then mail the CD. StuRat (talk) 13:55, 20 April 2010 (UTC)[reply]
A CD is very easy to send, even if you don't opt for the specialized CD mailers sold in US post offices and in office supply stores. Simply put the CD in a case and put a piece of cardboard in with it. It will probably not be a problem. I have sent many CDs and DVDs in the mail (and received even more as a Netflix member) and never had one break to my knowledge. I would imagine that other data storage would be similar. If it is something that would break from having something heavy put on it, wrap it in bubble wrap, or put it in a box. If you're asking about whether something could electronically zap the information off... I'm not sure. CDs, definitely not, unless it gets put in extreme heat or something odd like that. Flash memory, I don't know, but I think is fine from any x-raying that might be done. --Mr.98 (talk) 15:17, 20 April 2010 (UTC)[reply]
For CDs you can get lightweight cardboard mailers from eBay or maybe Meritline. They are very easy to use and have worked fine whenever I've used them. They are intended for one CD but hold two easily, and you can cram in three if you try. For USB sticks, use a bubble-pack envelope. Memory cards, use a case inside bubble pack. One thing you can probably do, though, with micro-SD cards (these are less than 1mm thick) is just tape them down on an ordinary letter. 66.127.53.162 (talk) 16:52, 20 April 2010 (UTC)[reply]
Several times I have sent CDs through the mail in either an oversized standard envelope or a very slightly padded envelope with no problem.
I also probably get half a dozen disks from Netflix a month and have only had three broken in the last five years. They ship in ordinary paper envelopes. I wouldn't count on Netflix's results as being typical though. They get special treatment from the post office. APL (talk) 22:09, 20 April 2010 (UTC)[reply]

Virtual Harvard architecture and the C language

If an operating system kernel used a different virtual address space for jump instructions than for all others, how would the C language standard have to change to accommodate this? NeonMerlin 15:09, 20 April 2010 (UTC)[reply]

The Modified Harvard architecture article points you to "Data in Program Space" (about the Atmel AVR and related architectures). Looking at that it seems they confine pointers to mean solely the data space, they have a compiler extension that forces a "data" declaration into code space, and they have macros (that probably resolve to asm) to read and write codespace. And it looks like they keep const data in dataspace (which is hardly a shock). Function pointers should work as normal, providing you just pass them around like magic cookies and don't dereference them other than by calling (but then how often, really, do you ever dereference a function pointer and go poking around in it anyway). Stuff like trampolines, debuggers, jits, and the toolchain will obviously have to be aware of how things work, but that doesn't impinge on the language at all. -- Finlay McWalterTalk 16:26, 20 April 2010 (UTC)[reply]
As an alternative, if you really needed to properly address codespace, you could add an additional type specifier to pointers, which clarified which kind of pointers they are:
    uint8_t * __code__ program_ptr;
    uint8_t * __data__ foo_ptr; // you'd probably have __data__ implicit, as most pointers will be data
The only caveats are that __code__ and __data__ pointers aren't assignment-compatible (and thus not arithmetically compatible) - so you couldn't add or compare them (and the compiler should barf if you try). Function pointers are just a typed kind of __code__ pointer, and unlike the above scheme can be dereferenced (although again you'd really not want to, mostly). The type incompatibility complicates the signatures of standard library functions, but in practice if you had four versions of memcpy (d-d, c-c, d-c, c-d) and maybe strcpy you'd have most of what people would actually need. -- Finlay McWalterTalk 16:39, 20 April 2010 (UTC)[reply]
I assume, incidentally, that when you say "a different virtual address space for jump instructions" you include conditional jumps (branches). -- Finlay McWalterTalk 16:43, 20 April 2010 (UTC)[reply]
Would changes to the standard actually be required? There's a gcc warning message that claims "ISO C forbids conversion of object pointer to function pointer type". -- Coneslayer (talk) 16:51, 20 April 2010 (UTC)[reply]
(edit conflict)And if we take your question literally, if there is no means to address codespace at all (i.e. there are no instructions to access it) then there shouldn't need to be any change to the C language. At this point if you dereferenced a function pointer, or wrote to it, you'd actually be accessing an unrelated piece of dataspace instead; but I really doubt that the C standard has anything worthwhile to say about what happens when you do either of those things. By the same token you couldn't meaningfully cast a function pointer to an ordinary data pointer or vice versa, but again why ever would you, and what really would a platform-independent language standard have to say about what happens if you did. -- Finlay McWalterTalk 16:55, 20 April 2010 (UTC)[reply]
Conversion between function pointers and data (object) pointers has always been forbidden by the C standard because of the existence of architectures that separate code and data. (Another example of which, by the way, is 16-bit DOS and Windows in every model except Tiny.) The standard does appear to allow casting via an intermediate integral type, like (dataptr_t)(long)funptr, but it's not required to do anything remotely sensible. There's not even a requirement that casting back to the original type will get you the original pointer. Incidentally, it's not legal to cast function pointers to void* and back either (and it won't work properly in some 8086 memory models). -- BenRG (talk) 18:21, 20 April 2010 (UTC)[reply]
I wasn't actually thinking of a data-to-code memcpy (at least, not one accessible in user space), since it would defeat the security purpose of keeping the two address spaces separate -- namely, preventing programs from jumping into memory whose contents aren't marked as trusted executable code. NeonMerlin 05:00, 21 April 2010 (UTC)[reply]

Lag/Ping/Latency, Online Gaming, & Graphics Settings

I've seen it written here and there that there are gamers who rack their graphics settings way up to max just in order to see all the eye candy in a game, and that by doing so, they cause their own computers to lag and also make the entire game lag for everyone else. I would like to verify this, and find out why/how this happens. --KägeTorä - (影虎) (TALK) 18:54, 20 April 2010 (UTC)[reply]

This could be the case when a player is hosting a game, but most online games now have a dedicated server with players connecting as clients. With the dedicated server setup, any lag for one player should not cause lag with other players. In first person shooters, this lag often gives a disadvantage to the slow player. This is a hard question to answer in general because I know of games where lag causes problems for all players, just the lagging player, or all players except the lagging player. Usually graphics settings will not cause network lag for anyone, but the frame rate may suffer if a player's computer can't handle the settings. Caltsar (talk) 19:13, 20 April 2010 (UTC)[reply]

Arrays

How do you divide an array for processing between two computers? 71.100.1.71 (talk) 19:16, 20 April 2010 (UTC) [reply]

You need to be more specific:
1) What's in the arrays ?
2) What type of processing do they require ?
3) Are these two cores on the same PC, two PCs connected over the Internet, or what ?
For example, let's say the arrays contain text strings of people's names and you want to sort them. Use either computer to divide the array into two arrays, maybe one starting with A-M and another with N-Z, then send them to each computer for sorting further. StuRat (talk) 19:36, 20 April 2010 (UTC)[reply]
While the contents of a list may be very easy to divide between two computers, consider the contents of an array with more than one row or column. For example, let's say we are processing a FAQ which is essentially broken down into a list of questions and the words which they contain, where the words occupy the cells of the array and the objective is to sort the words in the order which will minimize the number of queries necessary to cover all of the questions, otherwise known as the process of optimal classification. My question regards two or more computers connected via the Internet to handle the FAQ when the array grows too large for one computer to handle. 71.100.1.71 (talk) 20:16, 20 April 2010 (UTC)[reply]
So what exactly is the limiting factor for using a single computer ? Disk space can't be the problem. Memory could be, but even 1 MB of memory could handle maybe a thousand questions, at 1000 characters each, so that doesn't seem to be an issue. Is it just processing speed, then ? StuRat (talk) 22:29, 20 April 2010 (UTC)[reply]
This is just an example with a maximum number of possible questions made up of various combinations of words in the dictionary.
If you use words in place of questions and letters in place of the words, then the array gets smaller, due first to the elimination of repeated words and second to the letter permutations which are used for no word at all.
Even an older, slower, smaller PC can be used to hold and process an array to find unused letter permutations and try to make up new words by applying the rules of English (or whatever your language happens to be).
Move in the opposite direction by adding answers to the questions and a larger array is required; however, a better example of the need for a larger array can be found in the case of multiple state logical equation reduction (an online version of which can be played with here).
Suppose I have an electronic sensor array on which I want to perform the process of logical equation reduction to find the causes of a glitch. The array must be squared before processing can begin. The size of the array after it is squared, however, makes it larger than one computer can hold.
Consequently, I have to divide the processing between two computers, so how do I divide the array? 71.100.1.71 (talk) 01:08, 21 April 2010 (UTC)[reply]
Again, does "larger than one computer can hold" mean memory ? If so, just use paging space/virtual memory on the same computer. Yes, that will slow things down, but so will trying to do distributed computing. StuRat (talk) 04:57, 21 April 2010 (UTC)[reply]
My operating system already transfers the part of an array that is too big for memory to my hard drive. I just need an operating system that can then tap other hard drives in my computer and any hard drives that are on computers which I am connected to over a LAN or WAN. My application program would still be working with the virtual array and not concern itself with where the cells actually were. But I don't have such an operating system, and do not know where to get one. 71.100.1.71 (talk) 14:46, 21 April 2010 (UTC)[reply]
OK, finally we know that it's lack of disk space on one computer you're talking about. You can have many huge hard disks on a single computer, each up to 2 TB in size. I'm not sure if any O/S will use that much paging space or allow paging space to go beyond one hard disk, though, so you may need to manually store one file, then read in another and process it, until you go through all the data. Putting those hard drives on different computers would just needlessly complicate things. The only justification I can see for distributed computing is to decrease processing time. StuRat (talk) 16:13, 21 April 2010 (UTC)[reply]
I see now that the best way is to divide the array not in half but into cells, each with a virtual and a physical address. The virtual address determines the order in which each cell is executed, filled, viewed, etc. relative to other cells, while the physical address determines where the cell contents are physically located - which computer and hard drive stores the cell, and from that which CPU executes the cell's script. 71.100.1.71 (talk) 22:12, 21 April 2010 (UTC)[reply]
And an even more practical application of your query is: How do I sort, using all the cores on my multi-core processor computer? I think Quicksort is quite suitable for this. Comet Tuttle (talk) 22:19, 20 April 2010 (UTC)[reply]
I'd think a bucket sort would be in order there, as you could quickly subdivide the problem into 4 and easily combine the results back together, once done. StuRat (talk) 22:21, 20 April 2010 (UTC)[reply]
Instead of "combine", use the synonym "merge". You want a divide-and-conquer algorithm when using multiple separated processors. Merge sort is good for that. The problem with multiple processors is the intercommunication involved. With a multi-core processor, you can avoid the communication and take advantage of all of the cores - well, at least 2 of them. Of course, there will never be agreement about which sort routine is the "best" for any given application. -- kainaw 05:01, 21 April 2010 (UTC)[reply]
See parallel computing and thread (computer science). IIRC, handling large arrays is a typical example when you learn about multi-threaded programming. Astronaut (talk) 15:39, 21 April 2010 (UTC)[reply]
This begins to sound like you are trying to program a search engine. You might look at the articles (here and elsewhere) about MapReduce and Hadoop. The book Information Retrieval discusses search implementation at length. It is gratis online and quite good. 66.127.53.162 (talk) 20:31, 21 April 2010 (UTC)[reply]
Not a search engine, at least not one that does more than find vacant places in memory large enough to hold one of the array's cells, with an available CPU, communications, etc. The program is already working online here as stated above. It starts by generating an array whose size is the number of states raised to the power of the number of variables, squared, i.e. (s^v)^2. As you can see, it does not take many states or variables to use up a whole hard drive. The way I have decided now to handle this is by cell rather than by groups of cells, and to make each cell relatively independent, like a packet. They still have to interact, etc., so I have decided to set them up like spreadsheet cells, in which each cell has attributes, properties, events, methods and the like that are all together in its packet, which includes its virtual and its physical address. In this way, all that happens to a completed array is that it breaks itself up, and each cell then starts looking for a place to be stored and a CPU to process its scripts, until the results have been compiled in the results cell and returned to the user. The completed cells then evaporate, returning the memory they have occupied to their host system. 71.100.1.71 (talk) 02:08, 22 April 2010 (UTC)[reply]
You might want to rethink your method, as that (s^v)^2 formula will quickly take up any disk space you can make available, and the time it takes to transfer all that data will ensure that the program runs at a crawl. Also, what range of values do you have in mind for "s" and "v"? StuRat (talk) 13:15, 22 April 2010 (UTC)[reply]
Well, just the game of tic-tac-toe puts s at 3 and v at 9, which produces an array that exceeds the 64K array limit of VB 6. (BTW, is there a workaround for VB?) If you think in terms of each cell being independent, with its own instructions and data and a place to send its result, then my question is answered by dividing the array into intelligent cells rather than into halves or thirds, etc., and sending each cell on its way to find available memory and CPU time wherever they might be found. In fact, if server operating systems offered a rudimentary calculator service, like they do a character generator or time service or quote service, then that might be all that was needed for the cell packet to arrive at a result to send back to the array. Of course this only makes sense for an array with trillions of trillions of cells. 71.100.1.71 (talk) 16:29, 22 April 2010 (UTC)[reply]

Can't scroll .svgs

When I open up a large .svg image, such as is commonly used on Wikipedia, I'm unable to scroll it, and can therefore only see the upper left part of the image. Any ideas? I'm using Safari in OS X 10.4. --Lazar Taxon (talk) 19:19, 20 April 2010 (UTC)[reply]

Have you tried all these methods of scrolling ?:
1) Use the scroll bars at the right side and bottom, if there are any.
2) Use the arrow keys (possibly with SHIFT, CONTROL, or ALT).
3) Depress the mouse wheel until you get the scroll symbol, then move the mouse around to scroll, then depress the mouse wheel again to get out of scrolling mode.
4) Use PG UP and PG DN buttons (obviously only good for scrolling up and down).
Also, make sure you're set to max screen resolution under Start + (Settings) + Control Panel + Display + Settings Tab. The slider should be all the way to the right. StuRat (talk) 19:29, 20 April 2010 (UTC)[reply]
From the PC world, I'd suggest that if you happen to have a weird keyboard with a "Scroll Lock" button, tap it and try again? Comet Tuttle (talk) 22:14, 20 April 2010 (UTC)[reply]
It seems to be a bug in Safari - I have the same issue as you on Safari 4.0.5 on Windows 7. Safari's WebKit cousin Chrome also had the same problem - the bug info is here; at least in Chrome it was fixed relatively recently (it doesn't happen for me in Chrome 5.0.375.9 dev on Linux). -- Finlay McWalterTalk 23:07, 20 April 2010 (UTC)[reply]
Safari kind of sucks with SVG support. They'll load up but you don't have much control over them; they don't scroll, they don't enlarge correctly. It is lame and will probably be fixed at some point. --Mr.98 (talk) 02:02, 21 April 2010 (UTC)[reply]

Laptop hardware upgrade or new laptop

As a follow-up to this previous question, I sent an email to the software creators and got this reply:


After thinking about it some, I decided not to install the trial as I figured that what SteveBaker said was probably correct about the program running, but excruciatingly slowly. I came to this conclusion after this program ran excruciatingly slowly on my computer, often to the point where the program crashed. Given this and the fact that storm season is rapidly approaching, I really think I need to either upgrade my laptop's hardware or purchase a completely new laptop. (I want to take this program mobile, rather than have it stuck in one location.) This leads me to have a few questions:

  1. Is upgrading my laptop possible in any way, and if so, how much would it cost and what hardware would I need to look at upgrading, as well as what would I look at upgrading it to?
  2. If upgrading my laptop is impossible technically or unfeasible economically, what hardware would I need to look for in a new laptop to be assured the laptop would be up to scratch with the program requirements?
  3. How much would such a laptop cost? As a high school student, my money is rather limited, but I could probably scrounge up around $800-1000 between money I have saved up ($400) and selling my current laptop ($400-600, I think...how much would my laptop sell for assuming it is in fair-good condition? If I got it refurbished, would this increase my profit from selling the laptop?)

I know this is a massive load of questions, but any and all help would be appreciated. Thanks, Ks0stm (TCG) 22:36, 20 April 2010 (UTC)[reply]

I'd say try the free trial first. Your laptop is not that old by most standards. You can investigate its value on ebay or craigslist. But you can't upgrade the video hardware without more hackery than it sounds like you want to deal with. You can get a 1-2 year old nvidia-equipped laptop (almost as fast as the current models) for well under the $800-1000 you mention (Thinkpad T61p might be a good choice), or a new one in the $1000 range if you shop a little bit carefully. forum.thinkpads.com marketplace section is a good place to buy old laptops, especially thinkpads, and thinkpads.com regularly advertises Lenovo specials for different models. I'm not trying to plug the Thinkpad brand--it's just that I use several of them and am familiar with them. 66.127.53.162 (talk) 22:52, 20 April 2010 (UTC)[reply]
Side note, just to make sure you're considering it: you're looking for performance, and desktops have higher performance than equivalently priced laptops, as a rule. Comet Tuttle (talk) 22:55, 20 April 2010 (UTC)[reply]
Yes, I have considered this, but this program would be used to monitor the severe weather from whatever location I was at, whether that be a friend's house, relative's house, my house, the library, or even out of town...so it would be much more convenient to have it on a laptop that I can haul to each of those places rather than rush home every time severe weather threatens. That's my reasoning, anyway. Ks0stm (TCG) 23:02, 20 April 2010 (UTC)[reply]
What are you trying to do with the program? Because you can monitor weather from pretty much anything with an on/off switch and an internet connection. This software looks like it's used for doing much more than "monitoring" the weather, more like modeling its behavior in high detail. 130.126.222.146 (talk) 00:16, 21 April 2010 (UTC)[reply]
This software is a serious piece of scientific software. If you need it, I don't see the harm in trying the demo on your laptop.
But if you just want to "monitor the severe weather" I suggest you simply point your web-browser towards the web-site for the National Weather Service. For instance, if you live in Wichita, you could just bookmark this page. I assume you know all this, given your editing history, but your comment about using this heavy-duty piece of software for casual "monitoring" at a friend's house made me wonder.
To answer your actual question, I don't believe that the D600 series can be upgraded to use a new video card. (I'm basing that mostly on [2][3], and the idea that if a notebook supports this, they usually advertise it.) If you need this software on a laptop, I suspect you're going to need a new laptop. You'll need one with a nice video card, something recent in the Geforce line, perhaps. You'll mostly find these sorts of cards in laptops marketed towards gamers. APL (talk) 00:51, 21 April 2010 (UTC)[reply]
"Monitor" was probably a bit of an understatement...Severe weather is one of those things where I absolutely love anything and everything to do with it, and so my "monitoring" is much more in depth, detailed, and (if I may say so) intense than your average Joe's monitoring. Quite often when tracking severe weather, I find myself desiring to see what the National Weather Service sees on their end, and this program is as close to that as you can get without actually working for them. As for a new laptop, where might I look for one within the price range I can achieve? I know the Dell Outlet store has refurbished laptops fairly cheap, but as far as I know, you can't customize them. Ks0stm (TCG) 01:14, 21 April 2010 (UTC)[reply]
Thinkgeek.com and other such places sell carrying rigs for desktop computers, which are popular with gamers. They are nylon harnesses that wrap around the computer with a carrying handle on top, so you can carry it like a briefcase. As for buying new laptops, I've found it to be worth keeping an eye open for coupon specials at the usual online dealers. For used ones, try craigslist (local sellers so you can check out the machine before you buy it) and online forums that you know people on. I wouldn't buy a refurb; they just suck. Refurbs are machines that already failed for someone else, so the vendor just sells them again with or without a half-assed repair. That's even worse than buying a used computer that someone is selling because they're upgrading. 66.127.53.162 (talk) 01:20, 21 April 2010 (UTC)[reply]
Upgrading laptop hardware is virtually impossible. Forget it - it's simply not going to happen. So let's instead focus our attention on solving your practical problem.
I don't know how interactive this software you want to run is. But assuming it's not, then IMHO, you should seriously consider this: Buy a nice deskside computer - keep the laptop. Hook the deskside up to an always-on Ethernet connection at home and run some kind of remote-desktop gizmo on it. That way, you can take your slow old laptop to these difficult places and simply log into the big powerful deskside computer at home from wherever you are in the world. You'll have all the power of the deskside computer to compute and render the pretty pictures - which will load (albeit rather slowly) onto your laptop over the web. The laptop doesn't have to have any kind of decent CPU, memory or graphics - it just has to work. The deskside computer can be cheaply and easily upgraded whenever you feel the need - so you'll never be in this kind of bind again. SteveBaker (talk) 03:00, 21 April 2010 (UTC)[reply]
Er... how is sending large bitmaps over the network at 10^-n frames per second better than rendering them locally? It would only be better if the laptop couldn't render the graphics at all, at any speed, which seems very unlikely to me. It's a relatively recent chip with PS2.0 support. This isn't a bleeding-edge game, it's scientific visualization software. Intel doesn't suck that much. The original poster should at least try the free demo. You think it won't work; well, I think it will. Maybe I'm wrong, but the demo is free, so what's the problem? -- BenRG (talk) 08:12, 21 April 2010 (UTC)[reply]
Upgrade options for laptops are limited. You can upgrade the memory, disk and battery, and can attach external devices like keyboards, mice or monitors; but you are stuck with the same CPU, graphics chip and motherboard it came with when you bought it. Astronaut (talk) 13:06, 21 April 2010 (UTC)[reply]
That's not universally true. But it appears to be true in this case. (Some laptops have removable video cards.) APL (talk) 15:13, 21 April 2010 (UTC)[reply]
Really? In my experience, manufacturers might offer different graphic chipsets, but no upgrade path once you have made your purchase. Do you know which manufacturers offer removable video cards? Astronaut (talk) 15:30, 21 April 2010 (UTC)[reply]
I don't have a model number on hand, but here where I work, until recently we routinely upgraded the video cards of some high-performance Dell laptops that weren't available off-the-shelf in the exact configuration we needed. I never did it myself, but I watched it being done. I just used Google to find a photo-essay on the procedure here: [4]. However, I'm pretty sure that the question-asker's laptop is NOT one of the ones with this capability. 16:01, 21 April 2010 (UTC)
It's sometimes possible to upgrade laptops by swapping motherboards. There are also laptops with docking stations that have internal PCIe slots that you can plug graphics cards into, but then you've got a computer almost as big as a desktop. I like laptops but the main portability obstacle of a desktop IMO is the screen, keyboard, cables, etc. Hauling around just the "box" part of a desktop and connecting to it through a laptop over ethernet or wifi might be a workable compromise. 66.127.53.162 (talk) 20:36, 21 April 2010 (UTC)[reply]
To be clear, some laptops actually have removable video cards. I'm not talking about docking stations, I'm talking about removing the keyboard of your laptop and yanking out its video card and putting in a different one. But the question-asker's laptop isn't one of them. APL (talk) 00:52, 22 April 2010 (UTC)[reply]