Computing desk |  |  |
---|---|---|
< August 19 | << Jul | August | Sep >> | August 21 > |

Welcome to the Wikipedia Computing Reference Desk Archives

The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
From the official Microsoft announcement: "on October 18th ..., Windows 8.1 and Windows RT 8.1 will begin rolling out worldwide as a free update for consumers with Windows 8 or Windows RT devices through the Windows Store. Windows 8.1 will also be available at retail and on new devices starting on October 18th by market."
It mentions getting it from the Windows Store or from retail outlets. What about the normal Windows Update channel, since it is supposed to be a free upgrade for Windows 8 users? Bubba73 You talkin' to me? 02:32, 20 August 2013 (UTC)
Traditional Unix programming is single-threaded and (disregarding certain nonsense about spurious wakeups and bad packet checksums) operates entirely by blocking until an appropriate event occurs. Careful use of pselect() allows any number of simultaneous conversations with child processes, for instance, to be orchestrated without ever looping over a set of O_NONBLOCK file descriptors.
As seen in the waitpid() rationale, libraries are expected to use waitpid() (of which many variants exist like waitid() and wait4()) instead of wait() (or the obsolete wait3()) to avoid accidentally reaping other child processes. Is there any standard/effective technique for the case where a library organizes the work of many child processes?
Obviously a loop over all of them with WNOHANG "works", but it multiplies the inefficiency of polling with the inefficiency of waiting on each child separately. (A careful client process would invoke such a poll_children() only upon receiving SIGCHLD, but that is still very inefficient if multiple such libraries are in use!) Ideally, a clever library API would support the traditional Unix programming model (including communicating with the child processes as well as waiting on them) without assuming that it controlled all the subprocess-oriented business of the client program. (In particular, this disqualifies the strategy of, say, Twisted; in general, I'm not looking for a non-composable framework.)
This goal is already difficult to achieve in the select() case: I don't know a better solution than for each library to publish its set of file descriptors so that main() can select() across all of them at once. I thus have little hope of doing better in the child process case than to similarly push the multiplexing responsibility off entirely onto main(), which must then inform each library about the fate of its children (and perhaps maintain a table mapping child processes onto libraries!).
We can relax the requirements a bit; for instance, we could allow threading. (I think traditionalist Unix programmers would say that threads used for managing I/O and IPC are a lazy, inefficient solution, but this is a case where they bring elegance to the table as well.) Consider that it solves the select() case trivially: each library defines a blocking process_next_message() function and main() has but to create a thread for each. However, for child processes it is still complicated: the concept closest to an fd_set appears to be a process group, which is ugly for various reasons (it is externally visible, externally mutable, and designed for interactive job control rather than automated coprocessing).
One could also turn the child process problem into the select() problem by having each library create a single child process which then creates all the others and reports on them over a pipe, but that adds considerable complexity (consider the control protocol needed to ask for further child processes).
Subprocesses that work so closely with the parent arise particularly in the fork()-but-not-exec() case, where it is more standard to simply use threads instead of subprocesses in the first place. However, in some cases subprocesses are indispensable (e.g., languages with a GIL), and (given the single-threaded Unix heritage) it surprises me that the facilities for dealing with them are so primitive.
Well: that got long, even merely mentioning simultaneous communication and organization. Any suggestions? -- Tardis ( talk) 06:37, 20 August 2013 (UTC)
What is the difference between a RJ11-4P4C connector and a RJ10 connector?
Joneleth ( talk) 09:09, 20 August 2013 (UTC)
Problem is, I have a cable that is RJ45 at one end and what seems to be either an RJ11-4P4C or an RJ10 at the other. I need to buy a new cable now, and thus I need to know whether there's any difference between the two connectors.
Reading through the Modular connector section does not answer this.
Joneleth ( talk) 10:02, 20 August 2013 (UTC)
Yesterday I typed in a celebrity's name and he got 40 million results on Google. Today he has 9 million. Why did 11 million results disappear? 11:52, 20 August 2013 (UTC)
Your math seems really questionable. Joneleth ( talk) 12:03, 20 August 2013 (UTC)
Google result counts are a meaningless metric.-- Shantavira| feed me 16:12, 20 August 2013 (UTC)
start=990 instead of start=10. PrimeHunter ( talk) 13:37, 23 August 2013 (UTC)

I'm chasing a weird bug and I have a clue, but I don't know what it means.
I have a program that uses OpenGL and Motif, running on Red Hat Enterprise Linux 5 workstations. On two machines the program takes five minutes or so to come up, while on the other five machines it comes up immediately. All seven machines have the same software load and the same hardware.
So here's my clue. If I run the process on either of the two "slow" machines under gdb, or do a pstack while it is hung, it comes up right away. I know this must be telling me something important, but I cannot figure out what.
Any ideas? Tdjewell ( talk) 13:46, 20 August 2013 (UTC)
To prevent the police from looking into files stored on my memory cards, I was wondering whether there are simple programs that scramble files by adding random bits to the bits of binary files. I could, of course, do this myself using a hex editor, but I was thinking that there probably exist ready-to-use programs for it. You would then send the file containing the random bits to yourself via one channel (e.g. email) and put the modified files on the memory card you carry with you. And, of course, one can encrypt both files; splitting the data into two parts adds an extra layer of security. Count Iblis ( talk) 17:42, 20 August 2013 (UTC)
You guys have assumed from post 2 that the line "by adding random bits to the bits of binary files" means XORing those... taken literally Iblis wants to just append random bits to files, or, maybe add them on top... I'm not really sure. You guys seem to have gone down the same information hole that the RD is familiar with as usual. Shadowjams ( talk) 04:16, 22 August 2013 (UTC)