Python requests.post does not force Elasticsearch to create missing index

I want to push data to my Elasticsearch server using:
requests.post('http://localhost:9200/_bulk', data=data_1 + data_2)
and it complains that the index does not exist. I try creating the index manually:
curl -X PUT http://localhost:9200/_bulk
and it complains that I am not feeding a body to it:
{"error":{"root_cause":[{"type":"parse_exception","reason":"request body is required"}],"type":"parse_exception","reason":"request body is required"},"status":400}
Seems like a bit of a chicken and egg problem here. How can I create that _bulk index, and then post my data?
EDIT:
My data is too large to easily understand the schema. Here is a small snippet:
'{"create":{"_index":"products-guys","_type":"t","_id":"0"}}\n{"url":"http://www.plaisio.gr/thleoraseis/tv/tileoraseis/LG-TV-43-43LH630V.htm","title":"TV LG 43\\" 43LH630V LED Full HD Smart","description":"\\u039a\\u03b1\\u03b9 \\u03cc\\u03bc\\u03bf\\u03c1\\u03c6\\u03b7 \\u03ba\\u03b1\\u03b9 \\u03ad\\u03be\\u03c5\\u03c0\\u03bd\\u03b7, \\u03bc\\u03b5 \\u03b9\\u03c3\\u03c7\\u03c5\\u03c1\\u03cc \\u03b5\\u03c0\\u03b5\\u03be\\u03b5\\u03c1\\u03b3\\u03b1\\u03c3\\u03c4\\u03ae \\u03b5\\u03b9\\u03ba\\u03cc\\u03bd\\u03b1\\u03c2 \\u03ba\\u03b1\\u03b9 \\u03bb\\u03b5\\u03b9\\u03c4\\u03bf\\u03c5\\u03c1\\u03b3\\u03b9\\u03ba\\u03cc webOS 3.0 \\u03b5\\u03af\\u03bd\\u03b1\\u03b9 \\u03b7 \\u03c4\\u03b7\\u03bb\\u03b5\\u03cc\\u03c1\\u03b1\\u03c3\\u03b7 \\u03c0\\u03bf\\u03c5 \\u03c0\\u03ac\\u03b5\\u03b9 \\u03c3\\u03c4\\u03bf \\u03c3\\u03b1\\u03bb\\u03cc\\u03bd\\u03b9 \\u03c3\\u03bf\\u03c5","priceCurrency":"EUR","price":369.0}\n{"create":{"_index":"products-guys","_type":"t","_id":"1"}}\n{"url":"http://www.plaisio.gr/thleoraseis/tv/tileoraseis/Samsung-TV-43-UE43M5502.htm","title":"TV Samsung 43\\" UE43M5502 LED ...
This is essentially someone else's code that I need to make work. It seems that the "data" object I am passing to the PUT method is a string.
When I use requests.post('http://localhost:9200/_bulk', data=data)
I get <Response [406]>.

If you want to do a bulk request using requests:
response = requests.post('http://localhost:9200/_bulk', data=data_1 + data_2, headers={'content-type': 'application/json', 'charset': 'UTF-8'})
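For context, a minimal end-to-end sketch of that request; the sample documents and index name below are placeholders based on the snippet in the question, and depending on your Elasticsearch version you may also need the _type field from the original data:

import json
import requests

# Two sample action/document pairs in the bulk (NDJSON) format used in the
# question; the field values here are only placeholders.
data_1 = (json.dumps({"create": {"_index": "products-guys", "_id": "0"}}) + "\n"
          + json.dumps({"title": "TV LG 43 43LH630V", "price": 369.0}) + "\n")
data_2 = (json.dumps({"create": {"_index": "products-guys", "_id": "1"}}) + "\n"
          + json.dumps({"title": "TV Samsung 43 UE43M5502"}) + "\n")

# The bulk endpoint expects newline-delimited JSON, a trailing newline, and an
# explicit Content-Type header (without it Elasticsearch answers 406).
response = requests.post(
    'http://localhost:9200/_bulk',
    data=(data_1 + data_2).encode('utf-8'),
    headers={'Content-Type': 'application/json'},
)
print(response.status_code, response.json())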
Old Answer
I recommend using the bulk helper from the python library
from elasticsearch import Elasticsearch, helpers

client = Elasticsearch("localhost:9200")

def gendata():
    mywords = ['foo', 'bar', 'baz']
    for word in mywords:
        yield {
            "_index": "mywords",
            "word": word,
        }

resp = helpers.bulk(
    client,
    gendata(),
    index="some_index",
)
If you haven't touched the Elasticsearch configuration, a new index will be created automatically when documents are indexed.
About the ways you tried:
The request is probably malformed. For bulk ingest, the body shape is different from just sending the docs as a JSON array.
You are doing a PUT instead of a POST, and you have to specify the documents you want to ingest.
There is no need to create the empty index first. But if you want to, you can just do:
curl -X PUT http://localhost:9200/index_name
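If you prefer to stay in Python, an equivalent sketch of that optional pre-creation step with requests; the index name is taken from the question's data and is only an example:

import requests

# Explicitly create the index; Elasticsearch will normally auto-create it on
# first indexing, so this step is optional.
resp = requests.put('http://localhost:9200/products-guys')
print(resp.status_code, resp.json())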

Related

Extract fields from a JSON in Python(Django)

Hello, I am new to the Python world and still learning. I am currently developing a web app in Django and use Ajax to send requests. In view.py I receive a JSON payload, but I have not been able to extract its attributes individually to pass them to a SQL query. I have tried every way I can think of; I appreciate any help in advance.
def profesionales(request):
    body_unicode = request.body.decode('utf-8')
    received_json = json.loads(body_unicode)
    data = JsonResponse(received_json, safe=False)
    return data
data returns the following to me:
{opcion: 2, fecha_ini: "2021-02-01", fecha_fin: "2021-02-08", profesional: "168", sede: "Modulo 7", grafico: "2"}
This is the response I get, and I need to extract the value of each key into a variable.
You can treat this as a dict.
for key in received_json:
    print(key, received_json[key])
    # do your stuff here
But if it's always an object with the same keys (fixed keys), you can access them directly:
key_data = received_json[key]
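For the fixed keys shown in the question, a minimal sketch of pulling each value into its own variable inside the Django view; the variable names simply mirror the keys and are only illustrative:

import json
from django.http import JsonResponse

def profesionales(request):
    received_json = json.loads(request.body.decode('utf-8'))

    # Direct access works because the payload always carries these keys.
    opcion = received_json['opcion']
    fecha_ini = received_json['fecha_ini']
    fecha_fin = received_json['fecha_fin']
    profesional = received_json['profesional']
    sede = received_json['sede']
    grafico = received_json['grafico']

    # ...pass the variables to the SQL query here...
    return JsonResponse({'opcion': opcion, 'sede': sede})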

How to speed up returning a 20MB Json file from a Python-Flask application?

I am trying to call an API which in turn triggers a store procedure from our sqlserver database. This is how I coded it.
class Api_Name(Resource):
    def __init__(self):
        pass

    @classmethod
    def get(self):
        try:
            engine = database_engine
            connection = engine.connect()
            sql = "DECLARE @return_value int; EXEC @return_value = [dbname].[dbo].[proc_name]"
            return call_proc(sql, connection)
        except Exception as e:
            return {'message': 'Proc execution failed with error => {error}'.format(error=e)}, 400
call_proc is the method where I return the JSON from database.
def call_proc(sql: str, connection):
    try:
        json_data = []
        rv = connection.execute(sql)
        for result in rv:
            json_data.append(dict(zip(result.keys(), result)))
        return Response(json.dumps(json_data), status=200)
    except Exception as e:
        return {'message': '{error}'.format(error=e)}, 400
    finally:
        connection.close()
The problem with the output is the way JSON is returned and the size of it.
At first the API used to take 1 minute 30 seconds, when the return statement was like this:
case1: return Response(json.dumps(json_data), status=200, mimetype='application/json')
After looking online, I found that the above statement tries to prettify the JSON. So I removed the mimetype from the response and made it:
case2: return Response(json.dumps(json_data), status=200)
The API now runs for 30 seconds; the JSON output is not aligned properly, but it's still JSON.
I see the output size of the JSON returned from the API is close to 20 MB. I observed this in the Postman response:
Status: 200 OK Time: 29s Size: 19MB
The difference in Json output:
case1:
[
  {
    "col1": "val1",
    "col2": "val2"
  },
  {
    "col1": "val1",
    "col2": "val2"
  }
]
case2:
[{"col1":"val1","col2":"val2"},{"col1":"val1","col2":"val2"}]
Is the output from the two aforementioned cases actually different? If so, how can I fix the problem?
If there is no difference, is there any way to speed this up and reduce the run time further, for example by compressing the JSON which I am returning?
You can use gzip compression to shrink your plain-text payload from megabytes down to kilobytes, or use the flask-compress library for that.
I'd also suggest using ujson to make the dumps() call faster.
import gzip
from flask import Flask, make_response
import ujson as json

app = Flask(__name__)

@app.route('/data.json')
def compress():
    compression_level = 5  # out of 9 max
    data = [
        {"col1": "val1", "col2": "val2"},
        {"col1": "val1", "col2": "val2"}
    ]
    content = gzip.compress(json.dumps(data).encode('utf8'), compression_level)
    response = make_response(content)
    response.headers['Content-Length'] = len(content)
    response.headers['Content-Encoding'] = 'gzip'
    return response
Documentation:
https://docs.python.org/3/library/gzip.html
https://github.com/colour-science/flask-compress
https://pypi.org/project/ujson/
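For completeness, a minimal sketch of the flask-compress route (assuming the extension is installed with pip install flask-compress); it gzips eligible responses transparently, so the view can return plain JSON:

from flask import Flask, jsonify
from flask_compress import Compress

app = Flask(__name__)
Compress(app)  # compresses eligible responses automatically

@app.route('/data.json')
def data():
    rows = [
        {"col1": "val1", "col2": "val2"},
        {"col1": "val1", "col2": "val2"},
    ]
    return jsonify(rows)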
First of all, profile: if 90% of the time is being spent transferring across the network, then optimising processing speed is less useful than optimising transfer speed (for example, by compressing the response as wowkin recommended, though the web server may be configured to do this automatically, if you are using one).
Assuming that constructing the JSON is slow, if you control the database code you could use its JSON capabilities to serialise the data, and avoid doing it at the Python layer. For example,
SELECT col1, col2
FROM tbl
WHERE col3 > 42
FOR JSON AUTO
would give you
[
  {
    "col1": "foo",
    "col2": 1
  },
  {
    "col1": "bar",
    "col2": 2
  },
  ...
]
Nested structures can be created too, as described in the docs.
If the requester only needs the data, return it as a download using Flask's send_file feature and avoid the cost of constructing an HTML response:
from io import BytesIO
from flask import send_file

def call_proc(sql: str, connection):
    try:
        rv = connection.execute(sql)
        json_data = rv.fetchone()[0]
        # BytesIO expects encoded data; if you can get the server to encode
        # the data instead it may be faster.
        encoded_json = json_data.encode('utf-8')
        buf = BytesIO(encoded_json)
        return send_file(buf, mimetype='application/json', as_attachment=True, conditional=True)
    except Exception as e:
        return {'message': '{error}'.format(error=e)}, 400
    finally:
        connection.close()
You need to implement pagination on your API. 19MB is absurdly large and will lead to some very annoyed users.
gzip and cleverness with the JSON responses will sadly not be enough; you'll need to put in a bit more legwork.
Luckily, there are many pagination questions and answers, and Flask's modular approach means that someone has probably written a module applicable to your problem. I'd start off by re-implementing the method with an ORM; I hear that SQLAlchemy is quite good.
To answer your question:
1 - Both JSON outputs are semantically identical.
You can make use of http://www.jsondiff.com to compare two JSON documents.
2 - I would recommend chunking your data and sending it across the network.
This might help:
https://masnun.com/2016/09/18/python-using-the-requests-module-to-download-large-files-efficiently.html
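The linked article covers the client side; the core idea is to stream the response in chunks rather than loading the whole body into memory. A small sketch, where the URL and filename are placeholders:

import requests

url = 'http://localhost:5000/data.json'  # placeholder endpoint
with requests.get(url, stream=True) as r:
    r.raise_for_status()
    with open('data.json', 'wb') as f:
        # Write the response piece by piece instead of holding 20 MB in memory.
        for chunk in r.iter_content(chunk_size=8192):
            f.write(chunk)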
TL;DR; Try restructuring your JSON payload (i.e. change schema)
I see that you are constructing the JSON response in one of your APIs. Currently, your JSON payload looks something like:
[
  {
    "col0": "val00",
    "col1": "val01"
  },
  {
    "col0": "val10",
    "col1": "val11"
  },
  ...
]
I suggest you restructure it in such a way that each (first level) key in your JSON represents the entire column. So, for the above case, it will become something like:
{
"col0": ["val00", "val10", "val20", ...],
"col1": ["val01", "val11", "val21", ...]
}
Here are the results from some offline test I performed.
Experiment variables:
NUMBER_OF_COLUMNS = 10
NUMBER_OF_ROWS = 100000
LENGTH_OF_STR_DATA = 5
#!/usr/bin/env python3
import json

NUMBER_OF_COLUMNS = 10
NUMBER_OF_ROWS = 100000
LENGTH_OF_STR_DATA = 5

def get_column_name(id_):
    return 'col%d' % id_

def random_data():
    import string
    import random
    return ''.join(random.choices(string.ascii_letters, k=LENGTH_OF_STR_DATA))

def get_row():
    return {
        get_column_name(i): random_data()
        for i in range(NUMBER_OF_COLUMNS)
    }

# data1 has same schema as your JSON
data1 = [
    get_row() for _ in range(NUMBER_OF_ROWS)
]

with open("/var/tmp/1.json", "w") as f:
    json.dump(data1, f)

def get_column():
    return [random_data() for _ in range(NUMBER_OF_ROWS)]

# data2 has the new proposed schema, to help you reduce the size
data2 = {
    get_column_name(i): get_column()
    for i in range(NUMBER_OF_COLUMNS)
}

with open("/var/tmp/2.json", "w") as f:
    json.dump(data2, f)
Comparing sizes of the two JSONs:
$ du -h /var/tmp/1.json
17M
$ du -h /var/tmp/2.json
8.6M
In this case, it almost got reduced by half.
I would suggest you do the following:
First and foremost, profile your code to see the real culprit. If it is really the payload size, proceed further.
Try to change your JSON's schema (as suggested above)
Compress your payload before sending (either from your Flask WSGI app layer or your webserver level - if you are running your Flask app behind some production grade webserver like Apache or Nginx)
For large data that you can't paginate, using something like NDJSON (or any type of delimited record format) can really reduce the server resources needed, since you avoid holding the whole JSON object in memory. You would need access to the response stream to write each object/line to the response, though.
The response
[
  {
    "col1": "val1",
    "col2": "val2"
  },
  {
    "col1": "val1",
    "col2": "val2"
  }
]
Would end up looking like
{"col1":"val1","col2":"val2"}
{"col1":"val1","col2":"val2"}
This also has advantages on the client, since you can parse and process each line on its own as well.
If you aren't dealing with nested data structures, responding with a CSV is going to be even smaller.
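A minimal sketch of streaming NDJSON out of Flask, where fetch_rows() is a placeholder for iterating over the database cursor:

import json
from flask import Flask, Response

app = Flask(__name__)

def fetch_rows():
    # Placeholder: iterate over the database cursor row by row here.
    for i in range(3):
        yield {"col1": "val%d" % i, "col2": "val%d" % i}

@app.route('/data.ndjson')
def stream_ndjson():
    def generate():
        # One JSON document per line; the full result set is never held in memory.
        for row in fetch_rows():
            yield json.dumps(row) + '\n'
    return Response(generate(), mimetype='application/x-ndjson')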
I want to note that there is a standard way to write a sequence of separate records in JSON, and it's described in RFC 7464. For each record:
Write the record separator byte (0x1E).
Write the JSON record, which is a regular JSON document that can also contain inner line breaks, in UTF-8.
Write the line feed byte (0x0A).
(Note that the JSON text sequence format, as it's called, uses a more liberal syntax for parsing text sequences of this kind; see the RFC for details.)
In your example, the JSON text sequence would look as follows, where \x1E and \x0A are the record separator and line feed bytes, respectively:
\x1E{"col1":"val1","col2":"val2"}\x0A\x1E{"col1":"val1","col2":"val2"}\x0A
Since the JSON text sequence format allows inner line breaks, you can write each JSON record as you naturally would, as in the following example:
\x1E{
"col1":"val1",
"col2":"val2"}
\x0A\x1E{
"col1":"val1",
"col2":"val2"
}\x0A
Notice that the media type for JSON text sequences is not application/json, but application/json-seq; see the RFC.
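As a small illustration, a sketch of emitting such a sequence in Python; the records and filename are placeholders:

import json

RS = b'\x1e'  # record separator byte
LF = b'\x0a'  # line feed byte

records = [
    {"col1": "val1", "col2": "val2"},
    {"col1": "val1", "col2": "val2"},
]

# RFC 7464: each record is RS + JSON document (UTF-8) + LF.
with open('records.json-seq', 'wb') as f:
    for record in records:
        f.write(RS + json.dumps(record).encode('utf-8') + LF)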

How to download list data from SharePoint Online to a csv (preferably) or json file?

I have accessed a list in SharePoint Online with Python and want to save the list data to a file (csv or json) to transform it and sort some metadata for a migration.
I have full access to the SharePoint site I am connecting to (client ID, secret, ...).
from office365.runtime.auth.authentication_context import AuthenticationContext
from office365.runtime.client_request import ClientRequest
from office365.sharepoint.client_context import ClientContext
I have set my settings:
app_settings = {
    'url': 'https://company.sharepoint.com/sites/abc',
    'client_id': 'id',
    'client_secret': 'secret'
}
Connecting to the site:
context_auth = AuthenticationContext(url=app_settings['url'])
context_auth.acquire_token_for_app(client_id=app_settings['client_id'],
                                   client_secret=app_settings['client_secret'])
ctx = ClientContext(app_settings['url'], context_auth)
Getting the lists and checking the titles:
lists = ctx.web.lists
ctx.load(lists)
ctx.execute_query()
for lista in lists:
    print(lista.properties["Title"])  # this gives me the titles of each list and it works
lists is a ListCollection Object
From the previous code, I see that I want to get the list titled: "Analysis A":
a1 = lists.get_by_title("Analysis A")
ctx.load(a1)
ctx.execute_query() # a1 is a List item - non-iterable
Then I get the data in that list:
a1w = a1.get_items()
ctx.load(a1w)
ctx.execute_query() # a1w is a ListItemCollection - iterable
idea 1: df to json/csv
df1 = pd.DataFrame(a1w)  # doesn't work
idea 2:
follow this link: How to save a Sharepoint list as a file?
I get an error while executing the json.loads command:
JSONDecodeError: Extra data: line 1 column 5 (char 4)
Alternatives:
I tried Shareplum, but can't connect with it, like I did with office365-python-rest. My guess is that it doesn't have an authorisation option with client id and client secret (as far as I can see)
How would you do it? Or am I missing something?
Sample test demo for your reference.
context_auth = AuthenticationContext(url=app_settings['url'])
context_auth.acquire_token_for_app(client_id=app_settings['client_id'],
                                   client_secret=app_settings['client_secret'])
ctx = ClientContext(app_settings['url'], context_auth)
list = ctx.web.lists.get_by_title("ListA")
items = list.get_items()
ctx.load(items)
ctx.execute_query()
dataList = []
for item in items:
    dataList.append({"Title": item.properties["Title"], "Created": item.properties["Created"]})
    print("Item title: {0}".format(item.properties["Title"]))
pandas.read_json(json.dumps(dataList)).to_csv("output.csv", index=None, header=True)
Idea 1
It's hard to tell what can go wrong without the error trace, but I suspect it's likely to do with malformed data being passed as the argument. See the documentation to know exactly what's expected.
Do also consider updating your question with relevant stack error traces.
Idea 2
JSONDecodeError: Extra data: line 1 column 5 (char 4)
This error simply means that the Json string is not a valid format. You can validate JSON strings by using this service. This often tells you the point of error which you can then use it to manually fix the problem.
This error could also be caused if the object that is being parsed is a python object. You can avoid this by jsonifying each line as you go
data_list = []
for line in open('file_name.json', 'r'):
    data_list.append(json.loads(line))
This avoids storing intermediate python objects. Also see this related issue if nothing works.

Python 3 JSON/Dictionary help , not sure how to parse values?

I am attempting to write out some JSON output into a csv file, but first I am trying to understand how the data is structured. I am working from a sample script which connects to an API and pulls down data based on a specified query.
The JSON is returned from the server with this query:
response = api_client.get_search_results(search_id, 'application/json')
body = response.read().decode('utf-8')
body_json = json.loads(body)
If I perform a
print(body_json.keys())
i get the following output:
dict_keys(['events'])
So from this, is it right to assume that the entries I am really interested in are in another dictionary inside the events dictionary?
If so, how can I 'access' them?
Sample JSON data the search query returns to the variable above
{
  "events": [
    {
      "source_ip": "10.143.223.172",
      "dest_ip": "104.20.251.41",
      "domain": "www.theregister.co.uk",
      "Domain Path": "NULL",
      "Domain Query": "NULL",
      "Http Method": "GET",
      "Protocol": "HTTP",
      "Category": "NULL",
      "FullURL": "http://www.theregister.co.uk"
    },
    {
      "source_ip": "10.143.223.172",
      "dest_ip": "104.20.251.41",
      "domain": "www.theregister.co.uk",
      "Domain Path": "/2017/05/25/windows_is_now_built_on_git/",
      "Domain Query": "NULL",
      "Http Method": "GET",
      "Protocol": "HTTP",
      "Category": "NULL",
      "FullURL": "http://www.theregister.co.uk/2017/05/25/windows_is_now_built_on_git/"
    }
  ]
}
Any help would be greatly appreciated.
json_data.keys() only returns the keys associated with the JSON object.
Here is the code:
for key in json_data.keys():
    for i in range(len(json_data[key])):
        key2 = json_data[key][i].keys()
        for k in key2:
            print(k + ":" + json_data[key][i][k])
Output:
Http Method:GET
Category:NULL
domain:www.theregister.co.uk
Protocol:HTTP
Domain Query:NULL
Domain Path:NULL
source_ip:10.143.223.172
FullURL:http://www.theregister.co.uk
dest_ip:104.20.251.41
Http Method:GET
Category:NULL
domain:www.theregister.co.uk
Protocol:HTTP
Domain Query:NULL
Domain Path:/2017/05/25/windows_is_now_built_on_git/
source_ip:10.143.223.172
FullURL:http://www.theregister.co.uk/2017/05/25/windows_is_now_built_on_git/
dest_ip:104.20.251.41
To answer your question: yes. Your body_json has returned a dictionary with a key of "events" which contains a list of dictionaries.
The best way to 'access' them would be to iterate over them.
A very rudimentary example:
for i in body_json['events']:
    print(i)
Of course, during the iteration you could access the specific data that you need by replacing print(i) with print(i['FullURL']), saving it to a variable, and so on.
It's important to note that whenever you're working with APIs that return a JSON response, you're simply working with dictionaries and Python data structures.
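Since the end goal in the question is a CSV file, here is a minimal sketch using csv.DictWriter on the events; the output filename and the shortened sample record are only illustrative:

import csv

# body_json is the parsed response from the question; one shortened sample
# event is shown here so the snippet runs on its own.
body_json = {
    "events": [
        {"source_ip": "10.143.223.172", "dest_ip": "104.20.251.41",
         "domain": "www.theregister.co.uk", "FullURL": "http://www.theregister.co.uk"},
    ]
}

events = body_json["events"]

# Each event is a flat dict, so its keys can serve directly as the CSV header.
with open("events.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=events[0].keys())
    writer.writeheader()
    writer.writerows(events)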
Best of luck.

The JSON syntax vs html/xml tags

The JSON syntax definition says that HTML/XML tags (like the <script>...</script> part) are not part of valid JSON; see the description at http://json.org.
A number of browsers and tools ignore these things silently, but Python does not.
I'd like to insert the JavaScript code (Google Analytics) to get info about the users using this service (place, browsers, OS, ...).
What do you suggest I do?
Should I solve the problem in the [browser output][^1] or in the [python script][^2]?
thanks,
Antonio
[^1]: Browser output
<script>...</script>
[{"key": "value"}]
[^2]: python script
#!/usr/bin/env python
import urllib2, urllib, json

url = "http://.........."
params = {}
url = url + '?' + urllib.urlencode(params, doseq=True)
req = urllib2.Request(url)
headers = {'Accept': 'application/json;text/json'}
for key, val in headers.items():
    req.add_header(key, val)
data = urllib2.urlopen(req)
print json.load(data)
These sound like two different kinds of services--one is a user-oriented web view of some data, with visualizations, formatting, etc., and one is a machine-oriented data service. I would keep these separate, and maybe build the user view as an extension to the data service.
