bpo-37754: make shared_memory's unix implementation consistent with Windows #15460
Conversation
```python
if _HAVE_SIGMASK:
    signal.pthread_sigmask(signal.SIG_UNBLOCK, _IGNORED_SIGNALS)

```
Please remove this blank line
First of all, I notice CI failures for macOS. Can you check the portability of your solution across UNIX systems?
Yes, I have mentioned this issue at https://bugs.python.org/issue37754 (see the last three comments). I have therefore proposed another solution at the issue tracker, which uses shared semaphores to implement reference counting of shared memory segments and which will be portable across UNIX (including macOS). I just wanted some feedback on the shared-semaphore approach to know if it's the right way to go.
Don't shared semaphores have a problem if a process crashes? Or are they automatically incremented?
@pitrou Currently, a resource_tracker is spawned for every process to keep track of resources and to clean up / free them if the process crashes. The resource_tracker is also re-spawned if it crashes itself, since it is responsible for cleaning up resources when a process crashes or forgets to free them. I was thinking of implementing a shared semaphore in this resource tracker to keep the reference count of the shared memory across all processes. So, if a process crashes, the resource tracker will decrement that semaphore's value and free the shared memory if its value becomes 0.
@vinay0410 Ah, right, that sounds like an interesting idea.
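For illustration only, a minimal sketch of that shared-semaphore reference-counting idea, assuming the third-party posix_ipc package and hypothetical names; the real design would live inside the resource tracker rather than in user code:

```python
# Sketch only (not CPython code): reference counting a shared memory block
# with a POSIX named semaphore via the third-party `posix_ipc` package.
import posix_ipc

SEM_NAME = "/psm_demo_refcount"   # hypothetical name paired with the block

def attach(sem_name=SEM_NAME):
    """Increment the reference count when a process attaches to the block."""
    sem = posix_ipc.Semaphore(sem_name, flags=posix_ipc.O_CREAT,
                              initial_value=0)
    sem.release()             # sem_post(): refcount += 1
    return sem

def detach(sem, unlink_block):
    """Decrement the count and free the block when it drops to zero."""
    sem.acquire(timeout=0)    # sem_trywait(): refcount -= 1
    # sem_getvalue() is not available on macOS, which is exactly the
    # portability concern raised above; checking the value and unlinking
    # is also racy unless it is serialized (e.g. by the resource tracker).
    if sem.value == 0:
        unlink_block()        # e.g. shm_unlink() the segment
        sem.unlink()          # remove the semaphore name as well
    sem.close()
```

Here `unlink_block` stands in for whatever actually removes the segment (for example `SharedMemory.unlink()`).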
For instance, consider the following scenario: three processes P1, P2 and P3 are trying to communicate using shared memory (a minimal reproduction sketch follows the list below).
--> P1 creates the shared memory block, and waits for P2 and P3 to access it.
--> P2 starts, attaches to this shared memory segment, writes some data to it and exits.
--> Now, in the case of Unix, shm_unlink is called as soon as P2 exits.
--> Next, P3 starts and tries to attach to the shared memory segment.
--> P3 will not be able to attach to the shared memory segment on Unix, because shm_unlink has already been called on that segment.
--> Whereas on Windows, P3 will be able to attach to the shared memory segment.
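A minimal reproduction of that scenario with multiprocessing.shared_memory might look like the following (names and sizes are arbitrary); before this change the attach in P3 could fail on Unix with FileNotFoundError, whereas on Windows it succeeds as long as P1 still has the block open:

```python
from multiprocessing import Process, shared_memory

def p2(name):
    shm = shared_memory.SharedMemory(name=name)   # attach to the existing block
    shm.buf[:5] = b"hello"                        # write some data
    shm.close()                                   # detach and exit

def p3(name):
    # Before this change, attaching here could fail on Unix with
    # FileNotFoundError because the block was unlinked when P2 exited.
    shm = shared_memory.SharedMemory(name=name)
    print(bytes(shm.buf[:5]))
    shm.close()

if __name__ == "__main__":
    block = shared_memory.SharedMemory(create=True, size=16)  # P1 creates
    writer = Process(target=p2, args=(block.name,))
    writer.start(); writer.join()
    reader = Process(target=p3, args=(block.name,))
    reader.start(); reader.join()
    block.close()
    block.unlink()   # explicit cleanup by the creator
```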
This pull request fixes the above issue by using advisory locking on the shared memory files to make Unix's shared memory behaviour consistent with Windows, as suggested in the issue thread.
https://bugs.python.org/issue37754
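For context, a rough sketch of the advisory-locking idea, assuming Linux's /dev/shm layout for named blocks; the helper names are hypothetical and the lock-conversion handling is simplified compared to the actual patch:

```python
# Sketch only: keep a shared flock() on the backing file while attached,
# and unlink the file only when no other process still holds a lock.
import fcntl
import os

def attach_with_lock(name):
    """Open the backing file and hold a shared lock while attached."""
    fd = os.open(f"/dev/shm/{name}", os.O_RDWR)   # Linux-specific path
    fcntl.flock(fd, fcntl.LOCK_SH)                # shared lock == "I am attached"
    return fd

def detach_and_maybe_unlink(fd, name):
    """Unlink the segment only if no other process still holds a lock."""
    try:
        # Converting to an exclusive lock succeeds only when this is the
        # last attached process; LOCK_NB avoids blocking.
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        pass                                      # others are still attached
    else:
        os.unlink(f"/dev/shm/{name}")             # emulate Windows semantics
    finally:
        os.close(fd)                              # closing also drops the lock
```

The design intent is that every attached process holds a shared lock on the backing file, so the file is only unlinked once the last detaching process manages to convert its lock to an exclusive one.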