Hi, I am trying to write Python functional tests for our application. It involves several external components, and we are mocking them all out. We have a good framework for mocking a service, but not one for mocking a database yet.
SQLite is very lightweight and I thought of using it, but it's serverless. Is there a way I can write a Python wrapper to make it act as a server, or should I look at other options like HSQLDB?
I don't understand your problem. Why do you care that it's serverless?
My standard technique for this is:
use SQLAlchemy
in tests, configure it with sqlite:/// or sqlite:///:memory:
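A minimal sketch of that setup using SQLAlchemy Core (the table and column names here are illustrative):

```python
import sqlalchemy as sa

# In tests, point the engine at an in-memory SQLite database;
# in production the same code would receive the real database URL.
engine = sa.create_engine("sqlite:///:memory:")

metadata = sa.MetaData()
users = sa.Table(
    "users", metadata,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("name", sa.String(50)),
)
metadata.create_all(engine)  # create the schema in the throwaway database

with engine.connect() as conn:
    conn.execute(users.insert().values(name="alice"))
    rows = conn.execute(sa.select(users)).fetchall()
    assert rows[0].name == "alice"
```

As long as your queries go through SQLAlchemy rather than hand-written dialect-specific SQL, the same code runs against SQLite in tests and your real server in production.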
Related
I have Python code that looks like this:
db = de_core.db.redshift.get_connection()  # opens a real Redshift connection
...
query = get_query(f"export_user_{user_component}").render()  # renders a templated SQL query
result = util.execute_query(db, query, user_id=user_id)  # runs it against Redshift
And it actually executes SQL. I want to write an integration test that exercises this SQL. The SQL is Redshift-flavored... so like PostgreSQL, but not quite. What's the best way to test this? Moto doesn't seem to support this kind of test. Are there any libraries that support this kind of integration test, where I can replace the real Redshift connection with one that behaves like it?
I want to be able to set up tables in the test, create records, have the SQL execute against this mock, and get results back. Is there anything like this?
In general, mocking a database requires the database engine to run somewhere to execute your SQL.
That is why things like Testcontainers or embedded-postgres exist; traditional apps use these for their integration testing.
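For example, with the testcontainers Python package (this assumes Docker is available on the machine running the tests):

```python
import sqlalchemy as sa
from testcontainers.postgres import PostgresContainer

# Spins up a disposable Postgres in Docker; the container is
# torn down automatically when the with-block exits.
with PostgresContainer("postgres:16") as pg:
    engine = sa.create_engine(pg.get_connection_url())
    with engine.connect() as conn:
        conn.execute(sa.text("CREATE TABLE users (id int, name text)"))
        conn.execute(sa.text("INSERT INTO users VALUES (1, 'alice')"))
        rows = conn.execute(sa.text("SELECT name FROM users")).fetchall()
        assert rows == [("alice",)]
```

Note that this only gets you vanilla Postgres, not Redshift's dialect.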
But as you noted, Redshift is only Postgres-flavored, so if your SQL is Redshift-specific, you might actually need Redshift itself to run your tests, with a test-dedicated database.
Some inspiration to create a wrapper class: https://codereview.stackexchange.com/questions/123143/unit-tests-for-a-redshift-wrapper-class
We have made a Python client which is used as an interface for users. Functions defined in the client internally call the APIs and return output to the users.
My requirement is to automate the Python client's functions and validate their output.
Please suggest tools to use.
There are several ways to do that:
You can write multiple tests for your application as test cases that call your functions, get the results, and validate them. This is called a "feature test" (see the sketch after this list). To do that, you can use the Python "unittest" library and run the tests periodically.
If you have a web application, you can use "selenium" to build automated test flows. (You can also run it in a Docker container.)
The other solution is to write another Python application that calls your functions, or sends requests wherever you want, to fetch specific data and validate it. (It's essentially the same as the other two solutions, just with a different implementation.)
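As a sketch of the first option (my_client and get_user are hypothetical stand-ins for your client module and one of its functions):

```python
import unittest

import my_client  # hypothetical: the client module under test


class TestGetUser(unittest.TestCase):
    def test_get_user_output(self):
        # Call the client function, which internally hits the API...
        result = my_client.get_user(user_id=1)
        # ...and validate the shape and content of its output.
        self.assertEqual(result["id"], 1)
        self.assertIn("name", result)


if __name__ == "__main__":
    unittest.main()
```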
The most straightforward way is to use Python for this; the simplest solution would be a library like pytest. A more comprehensive option would be something like the Robot Framework.
Given you have jmeter in your tags, I assume that at some point you will want to run a performance test; however, it might be easier to use Locust for this, as it's a pure-Python load-testing framework (see the sketch below).
If you still want to use JMeter, it's possible to call Python programs using the OS Process Sampler.
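A minimal locustfile for the performance-test case, assuming the API behind your client is an HTTP service (the endpoint is illustrative):

```python
from locust import HttpUser, task, between


class ApiUser(HttpUser):
    wait_time = between(1, 3)  # pause 1-3 seconds between tasks

    @task
    def get_user(self):
        # Hypothetical endpoint exposed by the API your client calls.
        self.client.get("/users/1")
```

Run it with something like `locust -f locustfile.py --host https://your-api.example.com`.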
I'm building some unit tests for my Python module, which interfaces with a MySQL database via SQLAlchemy. From reading around, I gather the best way to do this is to create a test database that I can query as if it were the real thing. I've done this, but how should I test the existing queries in the module, given that they currently all point at the live database?
The only idea I'd come up with was to do something like the following:
def run_query(engine, db_name='live_db'):
    engine.execute(f'SELECT * FROM {db_name}.<table_name>')
I could then pass in test_db when I run the function from unittest. Is there a better way?
For a scalable testing approach, I would suggest an intermediate DAL (data access layer) that decides which DB a query should be routed to.
See also: Testing with a Test Database
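A minimal sketch of such a layer, assuming SQLAlchemy (the URLs are illustrative):

```python
import sqlalchemy as sa


class DataAccessLayer:
    """Routes queries to whichever database the app is configured with."""

    def __init__(self, db_url):
        self.engine = sa.create_engine(db_url)

    def run_query(self, query, **params):
        with self.engine.connect() as conn:
            return conn.execute(sa.text(query), params).fetchall()


# Production wires it to the live database:
#   dal = DataAccessLayer("mysql+pymysql://user:pass@host/live_db")
# Tests wire it to the test database instead:
#   dal = DataAccessLayer("mysql+pymysql://user:pass@host/test_db")
```

The queries themselves then never mention a database name, so nothing in the module needs to change between live and test runs.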
Is there any Python library that can keep a client-side SQLite database in sync with a server-side PostgreSQL database?
There are solutions for Java, such as Daffodil or SymmetricDS. Is there something similar for Python?
SymmetricDS is a server-side solution for synchronization that gets triggered regardless of which language is being used to access the database. You should still be able to use that to synchronize the databases, while using Python libraries to actually query them. I would recommend sqlalchemy as a good database-independent query layer for Python.
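For instance, the same SQLAlchemy query code can target either side of the sync just by swapping the connection URL (the URLs and table name here are illustrative):

```python
import sqlalchemy as sa

# Client side: the local SQLite file; server side: PostgreSQL.
engines = [
    sa.create_engine("sqlite:///local.db"),
    sa.create_engine("postgresql+psycopg2://user:pass@host/mydb"),
]

for engine in engines:
    with engine.connect() as conn:
        count = conn.execute(sa.text("SELECT count(*) FROM users")).scalar()
```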
I want to write some unit tests for an application that uses MySQL. However, I do not want to connect to a real MySQL database, but rather to a temporary one that doesn't require any SQL server at all.
Is there a library for this (I could not find anything on Google)? Any design pattern? Note that DIP (dependency inversion) doesn't help, since I will still have to test the injected class.
There isn't a good way to do that. You want to run your queries against a real MySQL server, otherwise you don't know if they will work or not.
However, that doesn't mean you have to run them against a production server. We have scripts that create a unit-test database and then tear it down once the unit tests have run (sketched below). That way we don't have to maintain a static test database, but we still get to test against the real server.
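A sketch of that pattern, assuming SQLAlchemy and a local MySQL server (the credentials and database name are illustrative):

```python
import unittest

import sqlalchemy as sa

ADMIN_URL = "mysql+pymysql://root:secret@localhost/"  # illustrative


class MySQLTestCase(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Create a throwaway database on the real server...
        admin = sa.create_engine(ADMIN_URL, isolation_level="AUTOCOMMIT")
        with admin.connect() as conn:
            conn.execute(sa.text("CREATE DATABASE unit_test_db"))
        cls.engine = sa.create_engine(ADMIN_URL + "unit_test_db")

    @classmethod
    def tearDownClass(cls):
        # ...and drop it once the tests have run.
        cls.engine.dispose()
        admin = sa.create_engine(ADMIN_URL, isolation_level="AUTOCOMMIT")
        with admin.connect() as conn:
            conn.execute(sa.text("DROP DATABASE unit_test_db"))
```

Tests in the class then run their queries through cls.engine against a real MySQL, with no static test database to maintain.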
I've used python-mock and mox for such purposes (extremely lightweight tests that absolutely cannot require ANY SQL server), but for more extensive/in-depth testing, starting and populating a local instance of MySQL isn't too bad either.
Unfortunately SQLite's SQL dialect and MySQL's differ enough that trying to use the former for tests is somewhat impractical, unless you're using some ORM (Django, SQLObject, SQLAlchemy, ...) that can hide the dialect differences.