AttributeError: '_thread._local' object has no attribute 'token' - python

There are already questions that address this problem, e.g.
Python: 'thread._local object has no attribute 'todo'
But the solutions don't seem to apply to my problem. I make sure to access threading.local() in the same thread that sets the value. I'm trying to use this feature in conjunction with a socket server. This is my code:
class ThreadedTCPRequestHandler(socketserver.BaseRequestHandler):
    def handle(self):
        token = str(uuid4())
        global_data = threading.local()
        global_data.token = token
        logger.info(f"threading.local().token: {threading.local().token}")  # This line raises the error
The server code I'm using:
class ThreadedTCPServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
    pass

def run_server():
    server = ThreadedTCPServer(
        (definitions.HOST, definitions.PORT), ThreadedTCPRequestHandler
    )
    with server:
        server.serve_forever()

Your code does this:
1. Create a brand new threading.local
2. Store a reference to it in the variable global_data
3. Give it a token
4. Create a brand new threading.local
5. Print its token
Step 5 throws an exception because the new threading.local you created in step 4 does not have a token, because it is not the same threading.local you created in step 1.
Perhaps you meant {global_data.token}?
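For reference, a minimal sketch of the corrected handler, assuming the rest of the question's setup is unchanged; the key point is to log the object you actually stored the token on, since every call to threading.local() builds a fresh, empty object:

import threading
import socketserver
from uuid import uuid4

class ThreadedTCPRequestHandler(socketserver.BaseRequestHandler):
    def handle(self):
        token = str(uuid4())
        local_data = threading.local()   # a new, empty thread-local container
        local_data.token = token         # the attribute lives on *this* object
        # Log the object we actually assigned to, not a fresh threading.local()
        print(f"local_data.token: {local_data.token}")

Note that threading.local is only useful when a single shared instance (usually created at module level) is accessed from several threads; creating it inside handle() gives each call its own container anyway.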


Dictionary returns different values when accessed from different functions

MINIMAL EXAMPLE
Put a print inside the get_tuple_space function:
def get_tuple_space():
    global tuple_space
    print("VALUE INSIDE GETTER")
    print(tuple_space)
    return tuple_space
When I call this function inside the file server.py, where tuple_space is declared and defined, I get the expected result, that is, the tuple_space filled with some elements.
while True:
    # Get the list of sockets which are ready to be read through select
    readable, writable, exceptional = select.select(connected_list, [], [])
    for sock in readable:
        # New connection
        if sock == server_socket:
            connect()
        # Some incoming message from a client
        else:
            handle_incoming_msg(sock)
    print("Trying to get tuple space inside server")
    print(get_tuple_space())
However, calling this function in my client.py as follows gives me an empty dictionary:
import socket, select, string, sys
import server

# some code
# ...
# ...

while True:
    socket_list = [sys.stdin, client_socket]
    # Get the list of sockets which are readable
    rList, wList, error_list = select.select(socket_list, [], [])
    for sock in rList:
        # user entered a message
        msg = sys.stdin.readline()
        linda.blog_out(my_name, my_topic, msg, client_socket)
        # HERE
        print("trying to get tuple space inside client")
        print(server.get_tuple_space())
        # reads from topic
        messages = linda.blog_rd(my_topic)
        print(messages)
END OF MINIMAL EXAMPLE
UPDATE
Added "global tuple_space" in the functions that uses this dictionary. Also, I tried to call get_tuple_space inside my server code and it returned a correct value. Looks like the problem occurs when I call the function from another code.
END OF UPDATE
I'm having a probably very beginner-level problem with Python. I'm trying to write some code using sockets and dictionaries, but strangely my dict comes back filled in some functions and empty in others. I've made it global, so I was expecting it to be filled in all my functions.
I believe I'm missing something about Python scoping. Everything related to the sockets seems to be working fine.
Firstly, my dictionary is initialized in server.py as follows:
tuple_space = {}
After that, for example, the first function connects the client to the server socket and receives a message containing its name, topic of interest and the message written. Then my program updates the tuple_space dictionary with this information. The second function does a very similar thing, updating the tuple_space dictionary with new messages from that client about that topic.
Both return the most up-to-date version of my dictionary and run in my server code.
def connect():
    sockfd, addr = server_socket.accept()
    data = str(sockfd.recv(buffer))
    name, topic, msg = data.split("#")
    connected_list.append(sockfd)
    tuple_space[(name, topic)] = ""
    # add name and address
    registered_names.append(name)
    tuple_space[(name, topic)] = msg

def handle_incoming_msg(sock):
    # Data from client
    data = sock.recv(buffer)
    name, topic, msg = data.split("#")
    tuple_space[(name, topic)] = msg
    print(tuple_space)
Both functions are used in my main inside server.py: the first one when a new client connects and sends its name, topic and message, the second one when an already connected client sends something:
def main():
    server_socket.bind((host, port))
    server_socket.listen(10)  # listen to at most 10 connections at one time
    # Add server socket to the list of readable connections
    connected_list.append(server_socket)
    print("Servidor inicializado\n")
    while True:
        # Get the list of sockets which are ready to be read through select
        readable, writable, exceptional = select.select(connected_list, [], [])
        for sock in readable:
            # New connection
            if sock == server_socket:
                connect()
            # Some incoming message from a client
            else:
                handle_incoming_msg(sock)
    server_socket.close()
After that, I want to receive the last messages, stored in tuple_space, in my client.py code. Here is what I'm doing inside a loop in my client:
messages_list = linda.blog_rd(my_topic)
if len(messages_list) > 0:
    print(messages_list)
The blog_rd function is the following, defined in my class "Linda":
def blog_rd(self, topic):
    incoming_messages = []
    registered_names = get_registered_names()
    tuple_space = get_tuple_space()
    for i in range(len(registered_names)):
        name = registered_names[i]
        print(name)
        incoming_messages.append((name, tuple_space[(name, topic)]))
    return incoming_messages
It is supposed to get the registered names and then the messages stored in tuple_space, which is why it calls the get_tuple_space function.
The problem starts here:
def get_tuple_space():
    print(tuple_space)
    return tuple_space
When I call this function, the tuple_space dictionary, which is global, comes back empty. The same happens with the registered_names list. I created this post about the dictionary to simplify my explanation, as the error in registered_names is the same as in tuple_space.
In my project, this function is called from another file, which accesses it via an import of server.py. But what confuses me is that even the print inside get_tuple_space shows an empty tuple_space.
I was expecting it to print all the stored values, as it does in the previous two functions.
What am I doing wrong?
As far as I understand, tuple_space is declared in one file and used in one or several others. But every variable in Python is enclosed in the namespace of its module (the file where it is created). So if tuple_space is declared in server.py, you have to import it in another file with from server import tuple_space.
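A minimal sketch of that idea, with made-up module contents rather than the asker's project:

# server.py (illustrative)
tuple_space = {}

def get_tuple_space():
    return tuple_space

# client.py (illustrative)
import server

server.tuple_space[("alice", "news")] = "hello"
print(server.get_tuple_space())  # {('alice', 'news'): 'hello'}

One caveat that matters for this particular setup: this kind of sharing only works inside a single process. The asker's client.py and server.py run as separate programs talking over sockets, so import server in the client merely loads a fresh copy of the module into the client process; its tuple_space is never filled by the server's loop, which matches the empty dictionary seen above.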
thanks for everyone who tried to help.
As I said, I was having some trouble giving a minimal example: my minimal example without sockets worked with your suggestions, but the real project didn't.
What I did to solve it was to change how I represent tuple_space. I made it a class with the needed attributes that were previously stored in a dictionary. So server.py uses that object and updates it depending on the messages sent by the clients.
Not sure what was happening before, but I'm glad I solved the problem somehow.
Thanks!
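For completeness, a minimal sketch of what such a refactor could look like; the class and method names below are made up for illustration and are not the asker's actual code:

class TupleSpace:
    """Holds the shared state that was previously kept in plain dicts/lists."""
    def __init__(self):
        self.messages = {}           # (name, topic) -> last message
        self.registered_names = []

    def store(self, name, topic, msg):
        if name not in self.registered_names:
            self.registered_names.append(name)
        self.messages[(name, topic)] = msg

    def read(self, topic):
        return [(name, self.messages[(name, topic)])
                for name in self.registered_names
                if (name, topic) in self.messages]

# server.py creates a single instance and updates it from connect() /
# handle_incoming_msg(); clients still receive the data over the socket.
tuple_space = TupleSpace()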

How to access RequestHandlerClass for HTTPServer in Python once it is set?

I have a custom BaseHTTPServer.BaseHTTPRequestHandler that can (partially) be seen below. It has a special method that allows me to assign a reply generator - a class that takes some data in (for example the values of some of the parameters in a GET request) and generates an XML reply that can then be sent back to the client as a reply to the request:
class CustomHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    # implementation
    # ...
    def set_reply_generator(self, generator):
        self.generator = generator

    def do_GET(self):
        # process GET request
        # ...
        # and generate reply using generator
        reply = self.generator.generate(...)  # ... are some parameters that were in the GET request
        # Encode
        reply = reply.encode()
        # set headers
        self.__set_headers__(data_type='application/xml', data=reply)
        # and send away
        self.wfile.write(reply)
        self.wfile.write('\n'.encode())
I pass this handler to the BaseHTTPServer.HTTPServer:
def run(addr='127.0.0.1', port=8080):
    server_addr = (addr, port)
    server = BaseHTTPServer.HTTPServer(server_addr, CustomHandler)
    server_thread = threading.Thread(target=server.serve_forever)
    server_thread.start()
Since the constructor of BaseHTTPServer.HTTPServer expects a class and not an instance to be passed as the RequestHandlerClass argument, I cannot just create an instance of my handler, call set_reply_generator() and then pass it on. Even if that worked, I would still want to be able to access the handler later on (for example if a POST request changes the reply generator for the GET requests), and for that I need to know how to retrieve the instance that the server is using.
I've looked here but was unable to find it (perhaps I missed it). I have the bad feeling that it is private (aka __...__).
All I was able to find out so far is that the class of the handler that the server uses can be retrieved through the RequestHandlerClass member, which however is not the same as retrieving an instance of the handler that would allow me to call set_reply_generator(...).
In addition I tried to actually create an instance of the custom handler, but then I landed in a chicken-and-egg issue:
the constructor of the handler requires you to pass the instance of the server
the server requires you to pass the handler
This is sort of an indirect proof that the HTTPServer's constructor is the one that is responsible for instantiating the handler.
Already answered here. There is no need to directly access the handler instance; instead, create a static (class-level) member that can be accessed without an instance but is still picked up when the handler wants to use it.
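A minimal sketch of that idea, written against Python 2's BaseHTTPServer to match the question; the generator attribute and its generate() method are illustrative rather than the asker's exact code:

import BaseHTTPServer

class CustomHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    # Class-level attribute: shared by every instance the server creates,
    # and settable without ever constructing a handler yourself.
    generator = None

    def do_GET(self):
        reply = self.generator.generate() if self.generator else "<empty/>"
        data = reply.encode()
        self.send_response(200)
        self.send_header('Content-Type', 'application/xml')
        self.send_header('Content-Length', str(len(data)))
        self.end_headers()
        self.wfile.write(data)

# Swap the generator at any time, e.g. from a POST handler or the main thread:
# CustomHandler.generator = SomeReplyGenerator()
server = BaseHTTPServer.HTTPServer(('127.0.0.1', 8080), CustomHandler)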

How to call a Twisted reactor from a file different from the one that created it?

I have a question that could belong to Twisted or could be directly related to Python.
My problem, like the other one, is related to the disconnection process in Twisted.
As I read on this site, if I want to disconnect cleanly I have to perform the following steps:
The server must stop listening.
The client connection must disconnect.
The server connection must disconnect.
According to what I read on that page, for the first step I would have to run the stopListening method.
In the example mentioned on that page all actions are performed in the same script, which makes it easy to access the different variables and methods.
In my case, the server and the client are in different files and different locations.
I have a function that creates a server and assigns a protocol to it, and I want, from the client protocol in another file, to make an AMP call to a method that stops the connector.
The AMP call uses the SendMsg command.
class TESTServer(protocol.Protocol):
    factory = None
    sUsername = ""
    credProto = None
    bGSuser = None
    slot = None

    """
    Here was uninteresting code.
    """
        # (tail of the elided vSendMsg responder method)
        # upwards=self.bGSuser, forwarded=True, tx_timestamp=iTimestamp,\
        # message=sMsg)
        log.msg("self.connector")
        log.msg(self.connector)
        return {'bResult': True}
    SendMsg.responder(vSendMsg)

    def _testfunction(self):
        logger = logging.getLogger('server')
        log.startLogging(sys.stdout)
        pf = CredAMPServerFactory()
        sslContext = ssl.DefaultOpenSSLContextFactory('key/server.pem',
                                                      'key/public.pem',)
        self.connector = reactor.listenSSL(1234, pf, contextFactory=sslContext,)
        log.msg('Server running...')
        reactor.run()

if __name__ == '__main__':
    TESTServer()._testfunction()
The class CredAMPServerFactory assigns the corresponding protocol:
class CredAMPServerFactory(ServerFactory):
    """
    Server factory useful for creating L{CredReceiver} and L{SATNETServer} instances.

    This factory takes care of associating a L{Portal} with the L{CredReceiver}
    instances it creates. If the login is successfully achieved, a L{SATNETServer}
    instance is also created.
    """
    protocol = CredReceiver
In the "CredReceiver" class I have a call that assigns the protocol to the TestServer class. I do this to make calls using the AMP method "Responder".
self.protocol = SATNETServer
My problem is that when I make the call, the program responds with an error indicating that the CredReceiver object has no connector attribute.
File "/home/sgongar/Dev/protocol/server_amp.py", line 248, in vSendMsg
log.msg(self.connector)
exceptions.AttributeError: 'CredReceiver' object has no attribute 'connector'
How could I do this? Does anyone know of a similar example I could look at?
Thank you.
EDIT.
Server side:
server_amp.py
Starts a reactor: reactor.listenSSL(1234, pf, contextFactory=sslContext,) from within the SATNETServer class.
Assigns the protocol factory, pf, to the CredAMPServerFactory class, which belongs to the module server.py, also from within the SATNETServer class.
server.py
Within the CredAMPServerFactory class, the CredReceiver class is assigned as the protocol.
Once the connection is established, the SATNETServer class is assigned as the protocol.
Client side:
client_amp
Makes a call to the SendMsg method belonging to the SATNETServer class.
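No answer is recorded here, but for readers hitting the same AttributeError, a minimal sketch of one common Twisted pattern (illustrative names, not the asker's code): keep the IListeningPort returned by reactor.listenSSL / reactor.listenTCP on the factory, so every protocol instance the factory builds can reach it through self.factory and call stopListening() from a responder or dataReceived handler.

from twisted.internet import reactor, protocol

class StopperProtocol(protocol.Protocol):
    def dataReceived(self, data):
        if data.strip() == b"stop":
            # Step 1: stop listening, then step 2: drop this connection.
            d = self.factory.port.stopListening()
            d.addCallback(lambda _: self.transport.loseConnection())

class StoppableFactory(protocol.ServerFactory):
    protocol = StopperProtocol

    def __init__(self):
        self.port = None  # filled in once the server starts listening

def run():
    factory = StoppableFactory()
    factory.port = reactor.listenTCP(1234, factory)
    reactor.run()

if __name__ == '__main__':
    run()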

Python3 Flask - missing 1 required positional argument: 'self'

I have very simple Python code to access Amazon Simple Queue Service, but I get:
builtins.TypeError
TypeError: get_queue() missing 1 required positional argument: 'self'
My code:
class CloudQueue(object):
    conn = boto.sqs.connect_to_region("eu-west-1",
                                      aws_access_key_id="abc",
                                      aws_secret_access_key="abc")

    @app.route('/get/<name>')
    def get_queue(self, name):
        if(name != None):
            queue = self.conn.get_queue(str(name))  # <--------- HERE
        return conn.get_all_queues()
if __name__ == "__main__":
    cq = CloudQueue()
    app.debug = True
    app.run()
You cannot register methods as routes; at the time the decorator runs the class is still being defined and all you registered is the unbound function object. Since it is not bound to an instance there is no self to pass in.
Do not use a class here; create the connection anew for each request:
@app.route('/get/<name>')
def get_queue(name):
    conn = boto.sqs.connect_to_region("eu-west-1",
                                      aws_access_key_id="abc",
                                      aws_secret_access_key="abc")
    queue = conn.get_queue(name)
    return 'some response string'
You could set it as a global, but then you need to make sure you re-create the connection on the first request (so it continues to work even when a WSGI server uses child processes to handle requests):
@app.before_first_request
def connect_to_boto():
    global conn
    conn = boto.sqs.connect_to_region("eu-west-1",
                                      aws_access_key_id="abc",
                                      aws_secret_access_key="abc")

@app.route('/get/<name>')
def get_queue(name):
    queue = conn.get_queue(name)
    return 'some response string'
Use this only if you are sure that boto connection objects are thread-safe.
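As an aside that is not part of the original answer: if a class-based handler is really wanted, Flask can also register a bound method after the instance exists, via app.add_url_rule, so the missing-self problem from the traceback never arises. A rough sketch under that assumption:

from flask import Flask
import boto.sqs

app = Flask(__name__)

class CloudQueue(object):
    def __init__(self):
        self.conn = boto.sqs.connect_to_region("eu-west-1",
                                               aws_access_key_id="abc",
                                               aws_secret_access_key="abc")

    def get_queue(self, name):
        queue = self.conn.get_queue(str(name))
        return 'some response string'

cq = CloudQueue()
# cq.get_queue is a bound method, so Flask only needs to supply `name`.
app.add_url_rule('/get/<name>', 'get_queue', cq.get_queue)

The same thread-safety caveat as above applies to the shared boto connection.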

Accessing python class variable defined inside the main module of a script

I have a Django project that uses Celery for async task processing. I am using Python 2.7.
I have a class in a module client.py in my django project:
# client.py
class Client:
def __init__(self):
# code for opening a persistent connection and saving the connection client in a class variable
...
self.client = <connection client>
def get_connection_client(self):
return self.client
def send_message(self, message):
# --- Not the exact code but this is the function I need to access to for which I need access to the client variable---
self.client.send(message)
# Other functions that use the above method to send messages
...
This class needs to be instantiated only once to create one persistent connection to a remote server.
I run a script connection.py that runs indefinitely:
# connection.py
from client import Client

if __name__ == '__main__':
    clientobj = Client()
    client = clientobj.get_connection_client()

    # Blocking process
    while True:
        # waits for a message from the remote server
        ...
I need to access the variable client from another module tasks.py (needed for celery).
# tasks.py
...
from client import Client

@app.task
def function():
    # Need access to the client variable
    # <??? How do I get access to the client variable for the
    # already established connection???>
    message = "Message to send to the server using the established connection"
    client.send_message(message)
All three Python modules are on the same machine. connection.py is executed as a standalone script and is executed first. The method function() in tasks.py is called multiple times across other modules of the project whenever required, so I can't instantiate the Client class inside this method. Global variables don't work.
In Java, we can create a global static variable and access it throughout the project. How do we do this in Python?
Approaches I can think of, but am not sure can be done in Python:
Save this variable in a common file such that it is accessible in other modules of my project?
Save this client as a setting in either Django or Celery and access this setting in the required module?
Based on suggestions by sebastian, another way is to share variables between running processes. That is essentially what I want to do. How do I do this in Python?
For those interested to know why this is required, please see this question. It explains the complete system design and the various components involved.
I am open to suggestions that needs a change in the code structure as well.
multiprocessing provides all the tools you need to do this.
connection.py
from multiprocessing.managers import BaseManager
from client import Client

client = Client()

class ClientManager(BaseManager): pass

ClientManager.register('get_client', callable=lambda: client)
manager = ClientManager(address=('', 50000), authkey='abracadabra')
server = manager.get_server()
server.serve_forever()
tasks.py
from multiprocessing.managers import BaseManager

class ClientManager(BaseManager): pass

ClientManager.register('get_client')
manager = ClientManager(address=('localhost', 50000), authkey='abracadabra')
manager.connect()
client = manager.get_client()

@app.task
def function():
    message = "Message to send to the server using the established connection"
    client.send_message(message)
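A small note on top of this answer (a Python-version detail, not from the original): under Python 3 the authkey must be bytes, and the object returned by manager.get_client() is a proxy, so a call like client.send_message(message) is forwarded over the manager connection to the single Client instance living in the connection.py process.

manager = ClientManager(address=('localhost', 50000), authkey=b'abracadabra')  # bytes authkey on Python 3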
I don't have experience working with Django, but if they are executed from the same script you could make the Client a singleton, or maybe declare the Client in an __init__.py and then import it wherever you need it.
If you go for the singleton, you can make a decorator for that:
def singleton(cls):
    instances = {}
    def get_instance(*args, **kwargs):
        if cls not in instances:
            instances[cls] = cls(*args, **kwargs)
        return instances[cls]
    return get_instance
Then you would define:
# client.py
@singleton
class Client:
    def __init__(self):
        # code for opening a persistent connection and saving the connection client in a class variable
        ...
        self.client = <connection client>

    def get_connection_client(self):
        return self.client
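A small usage sketch of the singleton decorator (illustrative only): every call to Client() now returns the same cached instance, so any module that imports Client and calls it gets the connection created by the first call.

# hypothetical usage, assuming Client can actually connect
a = Client()
b = Client()
assert a is b  # both names refer to the single cached instance
conn = Client().get_connection_client()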
That's all I can suggest with the little description you have given. Maybe try to explain a little better how everything is run and which parts are involved.
Python has class attributes (attributes that are shared amongst instances) and class methods (methods that act on the class itself). Both are readable on both the class and an instance.
# client.py
class Client(object):
    _client = None

    @classmethod
    def connect(cls):
        # don't do anything if already connected
        if cls._client is not None:
            return
        # code for opening a persistent connection and saving the connection client in a class variable
        ...
        cls._client = <connection client>

    @classmethod
    def get_connection_client(cls):
        return cls._client

    def __init__(self):
        # make sure we try to have a connection on initialisation
        self.connect()
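A short usage sketch of the class-attribute approach (names follow the answer above; the connection code itself is still elided): because _client is stored on the class, every caller in the same process that imports Client sees the same connection once connect() has run.

# tasks.py (illustrative)
from client import Client

@app.task
def function():
    Client.connect()                       # no-op if already connected
    conn = Client.get_connection_client()  # same object for every caller in this process
    conn.send("Message to send to the server using the established connection")

As the answer itself notes next, this only shares the connection within a single process, which may not be enough when Celery workers run separately.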
Now I'm not sure this is the best solution to your problem.
If connection.py is importing tasks.py, you can do it in your tasks.py:
import __main__  # connection.py
main_globals = __main__.__dict__  # this "is" what you get in connection.py when you write globals()
client = main_globals["client"]   # this client has the same id as the client in connection.py
BaseManager is also an answer, but it uses socket networking on localhost, and that is not a good way of accessing a variable if you aren't already using multiprocessing. I mean, if you need multiprocessing you should use BaseManager; but if you don't need multiprocessing, it is not a good option to pull in multiprocessing just for this. My code simply takes a reference to the "client" variable in connection.py from the interpreter.
Also, if you want to use multiprocessing, my code won't work, because the interpreters in different processes are different.
Use pickle when reading it from file.
