I have an issue with my MySQL database, which I am working with from Python.
I have 2 tables: Raspberry_data and Operation1.
I must read the data from Operation1 and copy some values from Operation1 into the Raspberry_data table. The issue is that some column values in Raspberry_data are identical, which causes the query to work incorrectly.
Please check the following query:
http://sqlfiddle.com/#!9/a4c2ef/5
I must update the Current_operation and ID columns in the Raspberry_data table from the data in Operation1.
The expected result:
Current_operation = 1, ID = 4
Current_operation = 1, ID = 6
However, the result is:
Current_operation = 1, ID = 4
Current_operation = 1, ID = 4
How can I ensure that it copies the individual rows line by line?
I am not able to execute this query on SQL Fiddle for some reason, but I have tested it on my actual MySQL database and the results are the same.
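For what it's worth, the usual cause of a result like this is the UPDATE joining the two tables on a column whose values repeat, so every target row matches the same source row. A minimal sketch of a keyed update run from Python (the row_id column and the connection details are assumptions here, since the actual schema is only in the fiddle):

import mysql.connector

conn = mysql.connector.connect(host='localhost', user='pi',
                               password='secret', database='mydb')
cur = conn.cursor()

# Join on a column that is unique per row (the hypothetical row_id),
# so each Raspberry_data row pairs with exactly one Operation1 row.
cur.execute("""
    UPDATE Raspberry_data AS r
    JOIN Operation1 AS o ON o.row_id = r.row_id
    SET r.Current_operation = o.Current_operation,
        r.ID = o.ID
""")
conn.commit()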
My struggle is not with creating a table; I can create a table. The problem is populating columns based on calculations over other tables.
I have looked at "How to create all tables defined in models using peewee" and it does not help me do summations, counts, etc.
I have a hypothetical database (database.db) and created these two tables:
Table 1 (from class User)
id name
1 Jamie
2 Sam
3 Mary
Table 2 (from class Sessions)
id SessionId
1 4121
1 4333
1 4333
3 5432
I simply want to create a new table using peewee:
id name sessionCount TopSession # <- (Session that appears most for the given user)
1 Jamie 3 4333
2 Sam 0 NaN
3 Mary 1 5432
4 ...
Each entry in Table 1 and Table 2 was created using User.create(...) or Sessions.create(...).
The new table should look at the data that is in database.db (i.e. Table 1 and Table 2) and perform the calculations.
This would be simple in Pandas, but I can't seem to find a query that can do this. Please help.
I found it...
from peewee import fn

query = Sessions.select(fn.COUNT(Sessions.id)).where(Sessions.id == 1)
count = query.scalar()
print(count)  # 3

# Or, more directly:
count = Sessions.select().where(Sessions.id == 1).count()
print(count)  # 3
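For the original question's sessionCount and TopSession columns, the same pieces extend to grouped aggregates. A sketch, assuming (as in the example data) that the Sessions model's id field holds the user's id:

# sessionCount for every user in one grouped query
counts = (Sessions
          .select(Sessions.id, fn.COUNT(Sessions.SessionId).alias('session_count'))
          .group_by(Sessions.id))

# TopSession for one user: group by SessionId, order the groups
# by size (descending), and take the first row
top = (Sessions
       .select(Sessions.SessionId)
       .where(Sessions.id == 1)
       .group_by(Sessions.SessionId)
       .order_by(fn.COUNT(Sessions.SessionId).desc())
       .first())
print(top.SessionId if top else None)  # 4333 for user 1; None for a user with no sessions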
For anyone out there : )
table A:
id  name  age
1   G     29
2   A     30

table B (table B has the foreign key of table A in the tableA_id field, which can occur multiple times):
id  phone  rank  tableA_id
1   98989  A     1
2   98989  C     1
3   98989  D     2

table C (table C has the foreign key of table A in the tableA_id field, which can occur multiple times):
id  notes  email        tableA_id
1   98989  A#gmail.com  1
2   98989  C#gmail.com  1
In my case I want to get all the data from all the tables and display it on a single page. What I want is one single query that gets all the data from all three tables as one queryset. The id I am sending is tableA_id = 1, so how can I get the data for that record from all the tables? Does anyone have an idea? Please let me know; I am a newbie here.
Well, you probably can't do it in a single query. But using prefetch_related you can load all the related tables, so that DB hits are reduced. For example:
# if the models are defined like this:
from django.db import models

class TableA(models.Model):
    name = models.CharField(...)
    age = models.IntegerField(...)

class TableB(models.Model):
    table_a = models.ForeignKey(TableA, on_delete=models.CASCADE)

class TableC(models.Model):
    table_a = models.ForeignKey(TableA, on_delete=models.CASCADE)

# then the query will be like this; without a related_name, the default
# reverse accessors are tableb_set and tablec_set
table_data = TableA.objects.filter(pk=1).prefetch_related('tableb_set', 'tablec_set')
for data in table_data:
    print(data.name)
    print(data.tableb_set.all().values())
    print(data.tablec_set.all().values())
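Building on that, the prefetched rows can then be collected into one structure for rendering on a single page; a sketch (the context shape and field picks are assumptions):

# gather everything for the requested TableA row into one context object
rows = []
for a in TableA.objects.filter(pk=1).prefetch_related('tableb_set', 'tablec_set'):
    rows.append({
        'name': a.name,
        'age': a.age,
        'b_rows': list(a.tableb_set.values('phone', 'rank')),
        'c_rows': list(a.tablec_set.values('notes', 'email')),
    })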
It's still hard for me to clearly understand the way that Django makes queries.
I have two tables:
Table A:
+----+-----+----+
| id |code |name|
+----+-----+----+
Table B:
+----+----+
| id |name|
+----+----+
The name values in the two tables can be equal (or not). What I need to do is get the value of Table A's code column by comparing the name columns of both tables, whenever any row in Table B matches a row in Table A.
Example:
Table A:
+----+----+----+
| id |code|name|
+----+----+----+
| 4 | A1 |John|
+----+----+----+
Table B:
+----+----+
| id |name|
+----+----+
| 96 |John|
+----+----+
So, by comparing John (B) with John (A), I need A1 to be returned, since it's the code value in the matching row of Table A.
In conclusion, I need Django code to do this query:
a_name = 'John'
SELECT code FROM Table_A WHERE name = a_name
Take into account that I only know the value from Table B; therefore I can't get the value of code from Table A's name directly.
Another approach is to use Django's values and values_list methods. You provide the field name you want data for.
values = Table_A.objects.filter(name=B_name).values('code')
This returns a QuerySet of dictionaries with only the code values in it. From the Django documentation: https://docs.djangoproject.com/en/2.1/ref/models/querysets/#django.db.models.query.QuerySet.values
Or you can use values_list to format the result as a list.
values = Table_A.objects.filter(name=B_name).values_list('code')
This will return a QuerySet of tuples, even if you only request one field. The Django documentation: https://docs.djangoproject.com/en/2.1/ref/models/querysets/#django.db.models.query.QuerySet.values_list
To make this a little more robust, first get your list of name values from Table_B. Supplying flat=True gives you a true list, since values_list would otherwise return a list of tuples. Then use that list to filter on Table_A. You can return just the code, or the code and name. As written, this returns a flat list of codes for every name that matches between Table A and Table B:
b_names_list = Table_B.objects.values_list('name', flat=True)
values = Table_A.objects.filter(name__in=b_names_list).values_list('code', flat=True)
Suppose the names of your tables are A and B respectively; then:
try:
    obj = A.objects.get(name='John')
    if B.objects.filter(name='John').exists():
        print(obj.code)  # found a match, so print the code
except A.DoesNotExist:
    pass
Let's suppose Table_A and Table_B are Django models. Then your query may look like this:
a_name = 'John'

it_matches_on_b = (Table_B
                   .objects
                   .filter(name=a_name)
                   .exists())

first_a = (Table_A
           .objects
           .filter(name=a_name)
           .first())

your_code = first_a.code if it_matches_on_b and first_a is not None else None
I haven't commented the code because it is self-explanatory, but ask in the comments if you have questions.
B_name = 'whatever'
Table_A.objects.filter(name=B_name)
The above is the basic query if you want to get the field values connected to the name value from Table_A, based on the fact that you know the name value from Table_B.
To get the values:
obj = Table_A.objects.get(name=B_name)
print(obj.name)
print(obj.code)  # if you want the 'code' field value
Essentially, what I am trying to do is join Table_A to Table_B using a key to do a lookup in Table_B to pull column records for names present in Table_A.
Table_B can be thought of as the master name table that stores various attributes about a name. Table_A represents incoming data with information about a name.
There are two columns that represent a name: a column named 'raw_name' and a column named 'real_name'. The raw_name is the real_name prefixed with a code and an underscore.
i.e.
raw_name = CE993_VincentHanna
real_name = VincentHanna
Key = real_name, which exists in Table_A and Table_B
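For example, the real_name can be recovered from a raw_name by splitting on the first underscore:

raw_name = 'CE993_VincentHanna'
real_name = raw_name.split('_', 1)[1]  # 'VincentHanna'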
Please see the mySQL tables and query here: http://sqlfiddle.com/#!9/65e13/1
For all real_names in Table_A that DO NOT exist in Table_B, I want to store the raw_name/real_name pairs in an object so I can send an alert to the data-entry staff for manual insertion.
For all real_names in Table_A that DO exist in Table_B, meaning we already know the name, I want to add the newly seen raw_name associated with that real_name to our master Table_B.
In MySQL this is easy to do, as you can see in my sqlfiddle example. I join on real_name and compress/collapse the result with GROUP BY a.real_name, since I don't care if there are multiple records in Table_B for the same real_name.
All I want is to pull the attributes (stats1, stats2, stats3) so I can assign them to the newly discovered raw_name.
In the MySQL query result I can then separate the NULL records to be sent for manual data entry and automatically insert the remaining records into Table_B.
Now I am trying to do the same in Pandas but am stuck at the groupby on real_name.
import pandas as pd

e = {'raw_name': pd.Series(['AW103_Waingro', 'CE993_VincentHanna', 'EES43_NeilMcCauley', 'SME16_ChrisShiherlis',
'MEC14_MichaelCheritto', 'OTP23_RogerVanZant', 'MDU232_AlanMarciano']),
'real_name': pd.Series(['Waingro', 'VincentHanna', 'NeilMcCauley', 'ChrisShiherlis', 'MichaelCheritto',
'RogerVanZant', 'AlanMarciano'])}
f = {'raw_name': pd.Series(['SME893_VincentHanna', 'TVA405_VincentHanna', 'MET783_NeilMcCauley',
'CE321_NeilMcCauley', 'CIN453_NeilMcCauley', 'NIPS16_ChrisShiherlis',
'ALTW12_MichaelCheritto', 'NSP42_MichaelCheritto', 'CONS23_RogerVanZant',
'WAUE34_RogerVanZant']),
'real_name': pd.Series(['VincentHanna', 'VincentHanna', 'NeilMcCauley', 'NeilMcCauley', 'NeilMcCauley',
'ChrisShiherlis', 'MichaelCheritto', 'MichaelCheritto', 'RogerVanZant',
'RogerVanZant']),
'stats1': pd.Series(['meh1', 'meh1', 'yo1', 'yo1', 'yo1', 'hello1', 'bye1', 'bye1', 'namaste1',
'namaste1']),
'stats2': pd.Series(['meh2', 'meh2', 'yo2', 'yo2', 'yo2', 'hello2', 'bye2', 'bye2', 'namaste2',
'namaste2']),
'stats3': pd.Series(['meh3', 'meh3', 'yo3', 'yo3', 'yo3', 'hello3', 'bye3', 'bye3', 'namaste3',
'namaste3'])}
df_e = pd.DataFrame(e)
df_f = pd.DataFrame(f)
df_new = pd.merge(df_e, df_f, how='left', on='real_name', suffixes=['_left', '_right'])
df_new_grouped = df_new.groupby(df_new['raw_name_left'])
Now how do I compress/collapse the groups in df_new_grouped on real_name like I did in MySQL?
Once I have an object with the collapsed results, I can slice the dataframe to report the real_names we don't have a record of (NULL values) and store the newly discovered raw_names for those we already know.
You can drop duplicates based on the raw_name_left column, and also remove the raw_name_right column using drop:
In [99]: df_new.drop_duplicates('raw_name_left').drop('raw_name_right', axis=1)
Out[99]:
raw_name_left real_name stats1 stats2 stats3
0 AW103_Waingro Waingro NaN NaN NaN
1 CE993_VincentHanna VincentHanna meh1 meh2 meh3
3 EES43_NeilMcCauley NeilMcCauley yo1 yo2 yo3
6 SME16_ChrisShiherlis ChrisShiherlis hello1 hello2 hello3
7 MEC14_MichaelCheritto MichaelCheritto bye1 bye2 bye3
9 OTP23_RogerVanZant RogerVanZant namaste1 namaste2 namaste3
11 MDU232_AlanMarciano AlanMarciano NaN NaN NaN
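From here, the slicing step described in the question is straightforward; a sketch using stats1 as the null indicator (any of the stats columns works, since they are null together for unmatched names):

deduped = df_new.drop_duplicates('raw_name_left').drop('raw_name_right', axis=1)

# real_names with no match in Table_B: send these for manual data entry
unknown = deduped[deduped['stats1'].isnull()][['raw_name_left', 'real_name']]

# matched rows: these raw_names can be inserted into Table_B automatically
known = deduped[deduped['stats1'].notnull()]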
Just to be thorough, this can also be done using groupby, which I found on Wes McKinney's blog, although drop_duplicates is cleaner and more efficient:
http://wesmckinney.com/blog/filtering-out-duplicate-dataframe-rows/
index = [gp_keys[0] for gp_keys in df_new_grouped.groups.values()]
unique_df = df_new.reindex(index)
unique_df