Conversation
I think it would be something like (with conversation_timeout=10):

- t=0: user sends /start
- t=5: user sends /brew
- t=10: CANCEL-JOB-1 runs
- t=12: user sends /pourCoffee, but the conversation has already been canceled
- t=15: CANCEL-JOB-2 runs
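To make the intended behaviour behind that timeline concrete, here is a minimal sketch assuming hypothetical internals (the method name and the way the conversation key and timeout are threaded through are made up; only `timeout_jobs`, `Job.schedule_removal()` and `JobQueue.run_once()` come from the library): each handled update cancels the pending timeout job and schedules a new one, so CANCEL-JOB-1 would never fire once /brew is handled.

```python
# Hypothetical sketch, not actual python-telegram-bot source: reschedule the
# timeout on every update the conversation handles, so only the newest job
# can end the conversation.
def _reschedule_timeout(self, key, job_queue):
    old_job = self.timeout_jobs.pop(key, None)
    if old_job is not None:
        old_job.schedule_removal()            # e.g. CANCEL-JOB-1 is dropped when /brew arrives
    self.timeout_jobs[key] = job_queue.run_once(
        lambda bot, job: self.update_state(self.END, key),  # end the conversation on expiry
        self.conversation_timeout)            # 10 seconds in the timeline above
```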
Uh oh. I originally thought this wasn't super high priority because conversation_timeout was new and it would only trigger if someone changed their bot to use it, but that's not actually the case. The existing test_end_on_first_message test almost exposes the problem, but the relevant exception is suppressed by Dispatcher.process_update's exception catch-all. I'm pretty sure this actually affects all current users of ConversationHandler who haven't set conversation_timeout: it triggers whenever an entry point returns END directly, or whenever any state transitions to END.
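To make that concrete, here is a small standalone sketch (made-up names, not the library's code; it only assumes the branch body cancels timeout_job, which is what the rest of this thread implies): when no conversation_timeout was set, timeout_job is None, yet reaching END still enters the branch under the old `or` condition.

```python
# Standalone illustration of the failure mode, not library code.
END = -1

class FakeJob:
    def schedule_removal(self):
        pass

def cancel_timeout(timeout_job, new_state):
    # Buggy `or`: entering the branch does not guarantee timeout_job exists.
    if timeout_job is not None or new_state == END:
        timeout_job.schedule_removal()

cancel_timeout(FakeJob(), END)  # fine: the scheduled job gets cancelled
cancel_timeout(None, END)       # AttributeError: 'NoneType' object has no attribute 'schedule_removal'
```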
telegram/ext/conversationhandler.py (outdated)
timeout_job = self.timeout_jobs.get(self.current_conversation)

- if timeout_job is not None or new_state == self.END:
+ if timeout_job is not None and new_state == self.END:
So with this logic, this new test:
def test_conversation_timeout_keeps_extending(self, dp, bot, user1):
    handler = ConversationHandler(entry_points=self.entry_points, states=self.states,
                                  fallbacks=self.fallbacks, conversation_timeout=0.5)
    dp.add_handler(handler)
    # Start state machine, wait, do something, verify the timeout is extended.
    # t=0    /start      (timeout=.5)
    # t=.25  /brew       (timeout=.75)
    # t=.5   original timeout
    # t=.6   /pourCoffee (timeout=1.1)
    # t=.75  second timeout
    # t=1.1  actual timeout
    message = Message(0, user1, None, self.group, text='/start', bot=bot)
    dp.process_update(Update(update_id=0, message=message))
    assert handler.conversations.get((self.group.id, user1.id)) == self.THIRSTY
    sleep(0.25)  # t=.25
    dp.job_queue.tick()
    assert handler.conversations.get((self.group.id, user1.id)) == self.THIRSTY
    message.text = '/brew'
    dp.process_update(Update(update_id=0, message=message))
    assert handler.conversations.get((self.group.id, user1.id)) == self.BREWING
    sleep(0.35)  # t=.6
    dp.job_queue.tick()
    assert handler.conversations.get((self.group.id, user1.id)) == self.BREWING
    message.text = '/pourCoffee'
    dp.process_update(Update(update_id=0, message=message))
    assert handler.conversations.get((self.group.id, user1.id)) == self.DRINKING
    sleep(.4)  # t=1
    dp.job_queue.tick()
    assert handler.conversations.get((self.group.id, user1.id)) == self.DRINKING
    sleep(.1)  # t=1.1
    dp.job_queue.tick()
    assert handler.conversations.get((self.group.id, user1.id)) is None

fails at the t=.6 assertion:
assert handler.conversations.get((self.group.id, user1.id)) == self.BREWING
sleep(0.35) # t=.6
dp.job_queue.tick()
> assert handler.conversations.get((self.group.id, user1.id)) == self.BREWING
E assert None == 1
However, if this is changed to just `if timeout_job is not None:`, everything's happy.
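A standalone way to see why (again with made-up names, not the library's code): when /brew is handled at t=.25, new_state is BREWING rather than END, so the `and` variant never cancels the job scheduled by /start; that job fires at t=.5 and ends the conversation before the t=.6 assertion. Dropping the new_state check cancels (and lets the handler reschedule) the job on every handled update.

```python
# Standalone illustration of the two candidate conditions.
END, BREWING = -1, 1

def cancels_old_job(timeout_job, new_state, use_and):
    if use_and:
        return timeout_job is not None and new_state == END
    return timeout_job is not None

old_job = object()  # stands in for the timeout job scheduled by /start
print(cancels_old_job(old_job, BREWING, use_and=True))   # False -> the t=.5 job survives and fires
print(cancels_old_job(old_job, BREWING, use_and=False))  # True  -> the old job is cancelled, timeout extended
```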
It seems another problem was that when the job executed the timeout, it wasn't removed from `self.conversation_timeouts`, so things still failed later because the job remained in the handler's dict even though it had already run. This should fix it properly.
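A minimal sketch of that second fix, with made-up names (this is not the library's exact code; it only assumes the timeout runs as a job whose context is the conversation key): the callback drops its own entry from the dict before ending the conversation, so a later update can't find a job that has already run.

```python
# Hypothetical sketch, not actual python-telegram-bot source.
def _trigger_timeout(self, bot, job):
    key = job.context                   # assumed: the conversation key stored as the job's context
    self.timeout_jobs.pop(key, None)    # forget the job once it has fired
    self.update_state(self.END, key)    # end the conversation as usual
```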
nmlorg left a comment:
LGTM, for what it's worth :)
Is there an easy way for @alexbagirov to apply this locally to verify?
Hello again, will try it as soon as I can.
What he could do is clone the library, check out the branch, and pip install from the clone.
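Roughly like this (a sketch with a placeholder branch name, since the PR's branch isn't named in this thread):

```bash
git clone https://github.com/python-telegram-bot/python-telegram-bot.git
cd python-telegram-bot
git checkout <branch-with-this-fix>   # placeholder: use this PR's branch name
pip install .
```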
As found by @nmlorg and described in #1031
Went for the easiest fix.
fixes #1031