Computing desk
< July 29 | << Jun | July | Aug >> | July 31 >
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
Will Mobile Station (MS) Assisted GPS calculate the location of the mobile regardless of whether it is an emergency call?
Usually, Mobile Station Assisted GPS calculates the position of the mobile on a network server, in terms of latitude and longitude, while Mobile Station Based GPS calculates the position on the mobile itself. Both of these methods are used during emergency calls. Is it possible to calculate the position of the mobile even when the call is not an emergency call?
I would appreciate any clarification at your earliest convenience.
Thank you,
JOHN ROSE (talk) 11:24, 30 July 2013 (UTC)
I am experimenting a bit with C++ containers. I've written some code using vectors. The code looks valid to me according to what I've read at http://www.programmingincpp.com/vector-and-list.html.
    #include <iostream>
    #include <vector>
    #include <string>
    using namespace std;

    int main() {
        string lowstr;
        string upstr;
        cout << "Enter lower bound of search interval: " << endl;
        cin >> lowstr;
        cout << "Enter upper bound of search interval: " << endl;
        cin >> upstr;
        int lowlength = (unsigned) lowstr.size();
        int uplength = (unsigned) upstr.size();
        vector<char> lowbound(lowlength);
        vector<char> upbound(uplength);
        for (int i = 0; i = lowlength; i++) {
            lowbound.push_back(lowstr.at(i));
        }
        for (int j = 0; j = uplength; j++) {
            upbound.push_back(upstr.at(j));
        }
        cout << lowbound[0] << endl;
        cout << upbound[0] << endl;
        return 0;
    }
The code seems to compile. However, when I execute the program, after entering the two values, I get the error
terminate called after throwing an instance of 'std::out_of_range'
what(): basic_string::at
This application has requested the Runtime to terminate it in an unusual way.
Please contact the application's support team for more information.
In addition I get the typical Windows error telling me the exe isn't functioning properly anymore. Any ideas what the problem is here and how it could be fixed? -- Toshio Yamaguchi 12:26, 30 July 2013 (UTC)
I solved it by replacing i = lowlength; with i < lowlength; in the loop condition. -- Toshio Yamaguchi 13:12, 30 July 2013 (UTC)
Note that the same problem is in your for j loop. When your program enters the loop, it initializes j with zero, then it assigns uplength to j. Next it checks the value, which turns out non-zero, so it goes on to at(), and it uses the j value, which is equal to upstr.size(), and that, I suppose, causes the error. -- CiaPan (talk) 14:35, 30 July 2013 (UTC)
I also noticed that push_back appends the symbol at position i in lowstr behind the end of the pre-sized vector. -- Toshio Yamaguchi
20:23, 30 July 2013 (UTC)

I've had a theoretical idea for a while, and I spoke with a computer science professor of mine to confirm that it is possible. The idea is a method of extreme data compression. Theoretically, a data compression program that takes up a large amount of disk space could be created, being large due to the many mappings between raw data and its compressed equivalent. The larger such a program is, the more it could compress. At the extreme end, a compressed file could simply contain the character "i", which would decompress into the entire directory tree and files needed to play World of Warcraft or something like that, since the letter "i" could be mapped to the binary code of a rar file containing that data. I wouldn't mind programming a 4 GB compression program if it meant 1 TB of media would fit in 50 GB of space. ;)
The only problem I'm thinking of is that if we are looking at compressing programs or movies, we are looking at compressing sets of binary... which can only be compressed into... binary! In other words, we have a raw code made up of a number of raw characters, and then a compressed code, which may contain a number of characters representing the compressed equivalent. If the cardinality of the raw and compressed character sets is the same, we can't really have a raw -> compressed mapping that saves us anything, unless there are unused or wasted combinations of characters in the raw set. It would need to be the case that the cardinality of available characters in the raw text/binary/whatever is LESS than the cardinality of the compressed representation. For example: English words -> binary. There are about 250,000 words in the English language, according to one source. Any of these words could be encoded using 18 bits (which can distinguish 262,144 words). Since each letter of a word takes 8 bits to encode, this means every word could be encoded in just over 2 characters' worth of bits. Since the average word is longer than 2 letters, we have compression!
What ways can my idea be realized? I thought it would be as simple as having a large mapping from one set to another, but as I stated above, I no longer think this is the case. The only thing I can think of is taking an original binary string and passing it through the Huffman algorithm, maybe as many as 10 times, which should make the string of binary smaller and smaller, since we are mapping from binary to binary, and the compression is gained when we see lots of repetition!
Any insight on this is highly valued and appreciated. Thanks!
216.173.145.47 ( talk) 19:01, 30 July 2013 (UTC)
This makes sense, and I read the linked info and understand why the pigeonhole principle makes it impossible to compress arbitrary data. However, what about my Huffman algorithm idea? It would compress the data by finding patterns in, say, blocks of 8 bytes, then take the output and find patterns in that, and so on. Of course, each encoding would need a decode key, so the hope is that the saved data would more than offset a reasonably sized file header saying which symbols represent what. Wouldn't this work?
216.173.145.47 ( talk) 19:07, 31 July 2013 (UTC)
I have a Windows 8 computer (came with Win 7) and restarting takes close to 10 minutes to get back to a functioning desktop. If I shut it down, unplug the power cord, plug it back in, and turn it on, it boots much, much faster. (I have done it this way only twice and I haven't timed it.) Could there be a reason for this? Bubba73 You talkin' to me? 19:13, 30 July 2013 (UTC)
Update: Microsoft Answers is helping me with the problem. In the event log, the check for a Microsoft Office license is in there hundreds of times in a row. That might be the problem. Bubba73 You talkin' to me? 14:43, 1 August 2013 (UTC)
I was reading an interesting article about how YouTube and Netflix streams can seem really bad even on high-bandwidth broadband connections [3]. In the reader comments, a reader mentioned he was able to "tunnel YouTube to a VPS" (virtual private server) and get HD streams without any problems [4]. I hadn't even heard of a VPS until this. Besides subscribing to a VPS service, how do you set it up to tunnel streaming video past the ISP's video throttling? -- 157.254.210.11 (talk) 23:08, 30 July 2013 (UTC)