I need some help/advice on converting a string to HID key codes that represent keys on a keyboard. These HID codes are bytes, and there is a table listing them available here
My original idea was to loop over the characters of the string and look each one up in a table, but unfortunately I haven't been able to make it work.
How could I do this in a simple Python script? I have searched for other answers without results. The codes will be written to /dev/hidg0, which processes them as keystrokes.
You can use a dict to store the code table efficiently. Also consider the str.translate() method.
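As a minimal sketch of the dict + str.translate() idea (the usage values follow the HID Keyboard/Keypad table quoted below, Keyboard A = 0x04 through Keyboard Z = 0x1D; lowercase letters only, no modifiers):

```python
# Map lowercase letters to HID Keyboard/Keypad usage IDs
# ('a' -> 0x04 ... 'z' -> 0x1D per the HID usage tables).
hid_usage = {c: 0x04 + i for i, c in enumerate("abcdefghijklmnopqrstuvwxyz")}

# str.translate works on code points: integer replacement values are
# emitted as chr(value), so the translated string's code points ARE
# the usage IDs.
table = str.maketrans(hid_usage)

codes = [ord(ch) for ch in "abc".translate(table)]
print(codes)  # [4, 5, 6]
```

This handles one pass over the whole string at C speed; a plain dict lookup per character works just as well for short strings.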
You can get JSON with HID Usages from official source:
The HID Usage Tables 1.4 document also includes all Usage definitions as a JSON file as an attachment to the PDF. The PDF serves as the 'single' source of truth.
https://www.usb.org/hid
You can use https://www.pdfconvertonline.com/extract-pdf-attachments-online.html to extract the zipped JSON file.
Relevant part:
{
"Kind": "Defined",
"Id": 7,
"Name": "Keyboard/Keypad",
"UsageIds": [
...
{
"Id": 4,
"Name": "Keyboard A",
"Kinds": [
"Sel"
]
},
{
"Id": 5,
"Name": "Keyboard B",
"Kinds": [
"Sel"
]
},
{
"Id": 6,
"Name": "Keyboard C",
"Kinds": [
"Sel"
]
},
{
"Id": 7,
"Name": "Keyboard D",
"Kinds": [
"Sel"
]
},
{
"Id": 8,
"Name": "Keyboard E",
"Kinds": [
"Sel"
]
},
{
"Id": 9,
"Name": "Keyboard F",
"Kinds": [
"Sel"
]
},
{
"Id": 10,
"Name": "Keyboard G",
"Kinds": [
"Sel"
]
},
{
"Id": 11,
"Name": "Keyboard H",
"Kinds": [
"Sel"
]
},
{
"Id": 12,
"Name": "Keyboard I",
"Kinds": [
"Sel"
]
},
{
"Id": 13,
"Name": "Keyboard J",
"Kinds": [
"Sel"
]
},
{
"Id": 14,
"Name": "Keyboard K",
"Kinds": [
"Sel"
]
},
{
"Id": 15,
"Name": "Keyboard L",
"Kinds": [
"Sel"
]
},
{
"Id": 16,
"Name": "Keyboard M",
"Kinds": [
"Sel"
]
},
{
"Id": 17,
"Name": "Keyboard N",
"Kinds": [
"Sel"
]
},
{
"Id": 18,
"Name": "Keyboard O",
"Kinds": [
"Sel"
]
},
{
"Id": 19,
"Name": "Keyboard P",
"Kinds": [
"Sel"
]
},
{
"Id": 20,
"Name": "Keyboard Q",
"Kinds": [
"Sel"
]
},
{
"Id": 21,
"Name": "Keyboard R",
"Kinds": [
"Sel"
]
},
{
"Id": 22,
"Name": "Keyboard S",
"Kinds": [
"Sel"
]
},
{
"Id": 23,
"Name": "Keyboard T",
"Kinds": [
"Sel"
]
},
{
"Id": 24,
"Name": "Keyboard U",
"Kinds": [
"Sel"
]
},
{
"Id": 25,
"Name": "Keyboard V",
"Kinds": [
"Sel"
]
},
{
"Id": 26,
"Name": "Keyboard W",
"Kinds": [
"Sel"
]
},
{
"Id": 27,
"Name": "Keyboard X",
"Kinds": [
"Sel"
]
},
{
"Id": 28,
"Name": "Keyboard Y",
"Kinds": [
"Sel"
]
},
{
"Id": 29,
"Name": "Keyboard Z",
"Kinds": [
"Sel"
]
},
{
"Id": 30,
"Name": "Keyboard 1 and Bang",
"Kinds": [
"Sel"
]
},
{
"Id": 31,
"Name": "Keyboard 2 and At",
"Kinds": [
"Sel"
]
},
{
"Id": 32,
"Name": "Keyboard 3 and Hash",
"Kinds": [
"Sel"
]
},
{
"Id": 33,
"Name": "Keyboard 4 and Dollar",
"Kinds": [
"Sel"
]
},
{
"Id": 34,
"Name": "Keyboard 5 and Percent",
"Kinds": [
"Sel"
]
},
{
"Id": 35,
"Name": "Keyboard 6 and Caret",
"Kinds": [
"Sel"
]
},
{
"Id": 36,
"Name": "Keyboard 7 and Ampersand",
"Kinds": [
"Sel"
]
},
{
"Id": 37,
"Name": "Keyboard 8 and Star",
"Kinds": [
"Sel"
]
},
{
"Id": 38,
"Name": "Keyboard 9 and Left Bracket",
"Kinds": [
"Sel"
]
},
{
"Id": 39,
"Name": "Keyboard 0 and Right Bracket",
"Kinds": [
"Sel"
]
},
If you want only US-ASCII character -> HID usage conversion, then just map:
0007:0004..0007:001D -> a..z
0007:001E..0007:0027 -> 1..9,0
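The two ranges above can be generated rather than typed by hand. A sketch (shift state and punctuation are intentionally left out, and the 8-byte report layout assumes the common boot-keyboard gadget descriptor: modifier byte, reserved byte, six keycode slots):

```python
# Build char -> HID usage ID from the two ranges above:
# usages 0x04..0x1D cover 'a'..'z', 0x1E..0x27 cover '1'..'9' then '0'.
char_to_usage = {}
for i, c in enumerate("abcdefghijklmnopqrstuvwxyz"):
    char_to_usage[c] = 0x04 + i
for i, c in enumerate("1234567890"):
    char_to_usage[c] = 0x1E + i

def keystroke_report(ch):
    """8-byte boot-keyboard report (no modifiers) for one character,
    followed by an all-zero 'key released' report."""
    code = char_to_usage[ch]
    return bytes([0, 0, code, 0, 0, 0, 0, 0]) + bytes(8)

# Writing the reports to the gadget device would look like:
# with open("/dev/hidg0", "rb+") as fd:
#     for ch in "hello1":
#         fd.write(keystroke_report(ch))
print(char_to_usage["a"], char_to_usage["0"])  # 4 39
```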
If you need to do more advanced character -> HID usage conversion, then you need to look at keyboard layout mappings (every keyboard layout maps different keys to a character). Accessing these mappings is platform-specific, and keycodes differ between platforms. For example, keycode -> character tables for Linux are located here: https://github.com/freedesktop/xkeyboard-config/blob/master/symbols/ and Linux keycodes are defined in a special header file: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/include/uapi/linux/input-event-codes.h
You can also look at the Unicode CLDR project, which aims to collect such information. It contains a database of different keyboard layouts and maps keyboard buttons -> characters: https://github.com/unicode-org/cldr/tree/main/keyboards
In CLDR, to make it easier to identify keys, the rows on the keyboard are named starting with "A" for the bottom row up to "E" for the top row. The row of keys in the function section are considered to be in row "K". These row names are consistent with those given in ISO 9995-1.
I started using Python Cubes OLAP recently.
I'm trying to sum/avg a JSON Postgres column; how can I do this?
My db structure:
events
id
object_type
sn_name
spectra
id
snx_wavelengths (json column)
event_id
my json:
{
"dimensions": [
{
"name": "event",
"levels": [
{
"name": "object_type",
"label": "Object Type",
"attributes": [
"object_type"
]
},
{
"name": "sn_name",
"label": "name",
"attributes": [
"sn_name"
]
}
]
},
{
"name": "spectra",
"levels": [
{
"name": "catalog_name",
"label": "Catalog Name",
"attributes": [
"catalog_name"
]
},
{
"name": "capture_date",
"label": "Capture Date",
"attributes": [
"capture_date"
]
}
]
},
{
"name": "date"
}
],
"cubes": [
{
"id": "uid",
"name": "14G31Yx98ZG8aEhFHjOWNNBmFOETg5APjZo5AiHaqog5YxLMK5",
"dimensions": [
"event",
"spectra",
"date"
],
"aggregates": [
{
"name": "event_snx_wavelengths_sum",
"function": "sum",
"measure": "event.snx_wavelengths"
},
{
"name": "record_count",
"function": "count"
}
],
"joins": [
{
"master": "14G31Yx98ZG8aEhFHjOWNNBmFOETg5APjZo5AiHaqog5YxLMK5.id",
"detail": "spectra.event_id"
}
],
"mappings": {
"event.sn_name": "sn_name",
"event.object_type": "object_type",
"spectra.catalog_name": "spectra.catalog_name",
"spectra.capture_date": "spectra.capture_date",
"event.snx_wavelengths": "spectra.snx_wavelengths",
"date": "spectra.capture_date"
}
}
]
}
I'm getting the following error:
Unknown attribute ''event.snx_wavelengths''
Can anyone help?
I already tried using MongoDB to do the sum, without success.
This question already has answers here:
How to extract data from dictionary in the list
(3 answers)
Closed 11 months ago.
I have the following JSON output:
{
"detections": [
{
"source": "detection",
"uuid": "50594028",
"detectionTime": "2022-03-27T06:50:56Z",
"ingestionTime": "2022-03-27T07:04:50Z",
"filters": [
{
"id": "F2058",
"unique_id": "3638f7c0",
"level": "critical",
"name": "Possible Right-To-Left Override Attack",
"description": "Possible Right-To-Left Override Detected in the Filename",
"tactics": [
"TA0005"
],
"techniques": [
"T1036.002"
],
"highlightedObjects": [
{
"field": "fileName",
"type": "filename",
"value": [
"1465940311.,S=473394(NONAMEFL(Z00057-PIfdp.exe))"
]
},
{
"field": "filePathName",
"type": "fullpath",
"value": "/exports/10_19/mail/12/91/20193/new/1465940311.,S=473394(NONAMEFL(Z00057-PIfdp.exe))"
},
{
"field": "malName",
"type": "detection_name",
"value": "HEUR_RLOTRICK.A"
},
{
"field": "actResult",
"type": "text",
"value": [
"Passed"
]
},
{
"field": "scanType",
"type": "text",
"value": "REALTIME"
}
]
},
{
"id": "F2140",
"unique_id": "5a313874",
"level": "medium",
"name": "Malicious Software",
"description": "A malicious software was detected on an endpoint.",
"tactics": [],
"techniques": [],
"highlightedObjects": [
{
"field": "fileName",
"type": "filename",
"value": [
"1465940311.,S=473394(NONAMEFL(Z00057-PIfdp.exe))"
]
},
{
"field": "filePathName",
"type": "fullpath",
"value": "/exports/10_19/mail/12/91/rs001291-excluido-20193/new/1465940311.,S=473394(NONAMEFL(Z00057-PIfdp.exe))"
},
{
"field": "malName",
"type": "detection_name",
"value": "HEUR_RLOTRICK.A"
},
{
"field": "actResult",
"type": "text",
"value": [
"Passed"
]
},
{
"field": "scanType",
"type": "text",
"value": "REALTIME"
},
{
"field": "endpointIp",
"type": "ip",
"value": [
"xxx.xxx.xxx"
]
}
]
}
],
"entityType": "endpoint",
"entityName": "xxx(xxx.xxx.xxx)",
"endpoint": {
"name": "xxx",
"guid": "d1dd7e61",
"ips": [
"2xx.xxx.xxx"
]
}
}
Inside the 'filters' list there are two entries, one critical and one medium, both with a 'name' key.
I want to print only the first name, but when I print 'name', it returns both names:
How do I print only the first one?
If I put the print inside the loop over filters, it returns both names:
If I put the print inside the loop over detections, it only returns the second 'name', and that's not what I want:
If you only want to print the name of the first filter, there is no need to iterate over it; just index it and print the value under "name":
for d in r['detections']:
    print(d['filters'][0]['name'])
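Wrapped in a self-contained sketch (the sample payload is trimmed from the question), with a guard for detections whose filters list is empty:

```python
import json

# Trimmed version of the payload from the question
payload = json.loads("""
{"detections": [
  {"filters": [
    {"level": "critical", "name": "Possible Right-To-Left Override Attack"},
    {"level": "medium", "name": "Malicious Software"}
  ]}
]}
""")

def first_filter_names(data):
    """Name of the first filter in each detection (skipping empty lists)."""
    return [d["filters"][0]["name"] for d in data["detections"] if d.get("filters")]

print(first_filter_names(payload))
# ['Possible Right-To-Left Override Attack']
```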
I've been struggling with a nested JSON structure and how to convert it to the correct form:
{
"id": "0c576f35-d704-4fa8-8cbb-311c6be36358",
"employee_id": null,
"creator_id": "16ca2db9-206c-4e18-891d-a00a5252dbd3",
"closed_by_id": null,
"request_number": 23,
"priority": "2",
"form_id": "urlaub-weitere-abwesenheiten",
"status": "opened",
"name": "Urlaub & weitere Abwesenheiten",
"read_by_employee": false,
"custom_status": {
"id": 15793,
"name": "In Bearbeitung HR"
},
"due_date": null,
"created_at": "2021-03-29T15:18:37.572040+02:00",
"updated_at": "2021-03-29T15:22:15.590156+02:00",
"closed_at": null,
"archived_at": null,
"attachment_count": 1,
"category": {
"id": "payroll-time-management",
"name": "Payroll, Time & Attendance"
},
"public_comment_count": 0,
"form_data": [
{
"field_id": "subcategory",
"values": [
"Time & Attendance - Manage monthly/year-end consolidation and report"
]
},
{
"field_id": "separator-2",
"values": [
null
]
},
{
"field_id": "art-der-massnahme",
"values": [
"Fortbildung"
]
},
{
"field_id": "bezeichnung-der-schulung-kurses",
"values": [
"dfgzhujiko"
]
},
{
"field_id": "startdatum",
"values": [
"2021-03-26"
]
},
{
"field_id": "enddatum",
"values": [
"2021-03-27"
]
},
{
"field_id": "freistellung",
"values": [
"nein"
]
},
{
"field_id": "mit-bildungsurlaub",
"values": [
""
]
},
{
"field_id": "kommentarfeld_fortbildung",
"values": [
""
]
},
{
"field_id": "separator",
"values": [
null
]
},
{
"field_id": "instructions",
"values": [
null
]
},
{
"field_id": "entscheidung-hr-bp",
"values": [
"Zustimmen"
]
},
{
"field_id": "kommentarfeld-hr-bp",
"values": [
"wsdfghjkmhnbgvfcdxsybvnm,"
]
},
{
"field_id": "individuelle-abstimmung",
"values": [
""
]
}
],
"form_files": [
{
"id": 30129,
"filename": "empty_background.png",
"field_id": "anhang"
}
],
"visible_by_employee": false,
"organization_ids": [],
"need_edit_by_employee": false,
"attachments": []
}
Using a simple solution with a pandas DataFrame:
Request = pd.DataFrame.from_dict(pd.json_normalize(data), orient='columns')
It displays almost in its correct form:
How do I split the dictionaries out of the form_data and form_files columns? I've done a lot of research but I'm still having a lot of trouble solving this problem: how do I split form_data into columns (not rows), keyed to the meta ID?
You can do something like this. Pass the dataframe and the column to the function as arguments:
import ast
import pandas as pd

def explode_node(child_df, column_value):
    child_df = child_df.dropna(subset=[column_value])
    # Parse stringified lists/dicts before normalizing
    if isinstance(child_df[column_value].iloc[0], str):
        child_df[column_value] = child_df[column_value].apply(ast.literal_eval)
    expanded_child_df = (
        pd.concat({i: pd.json_normalize(x) for i, x in child_df.pop(column_value).items()})
        .reset_index(level=1, drop=True)
        .join(child_df, how='right', lsuffix='_left', rsuffix='_right')
        .reset_index(drop=True))
    expanded_child_df.columns = map(str.lower, expanded_child_df.columns)
    return expanded_child_df
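For the form_data column specifically, pd.json_normalize can also expand it in one call when you pass record_path and meta. A sketch on a trimmed version of the payload (field names are taken from the question):

```python
import pandas as pd

# Trimmed version of the request payload from the question
data = {
    "id": "0c576f35-d704-4fa8-8cbb-311c6be36358",
    "request_number": 23,
    "form_data": [
        {"field_id": "subcategory", "values": ["Time & Attendance"]},
        {"field_id": "startdatum", "values": ["2021-03-26"]},
    ],
}

# One row per form_data entry, with the parent id carried along as meta
rows = pd.json_normalize(data, record_path="form_data", meta=["id"])
print(rows)

# Pivot so each field_id becomes its own column, keyed to the id
wide = rows.assign(values=rows["values"].str[0]).pivot(
    index="id", columns="field_id", values="values"
)
print(wide)
```

The same call with record_path="form_files" expands the other nested column; joining the two results on "id" gives one wide row per request.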
I can no longer open an .ipynb file in a VS Code Jupyter notebook. I have a problem with this one particular file, as my other files open without a problem. I think the problem comes from an error in the metadata, as the metadata in this file looks very different from the metadata structure of the other files that open successfully.
This file used to open without any glitch in the past but now when I try to open the file, I see the following error:
Command failed: C:/Users/Tony/anaconda3/Scripts/activate && conda activate base && echo 'e8b39361-0157-4923-80e1-22d70d46dee6' && python c:\Users\Tony.vscode\extensions\ms-python.python-2020.9.112786\pythonFiles\pyvsc-run-isolated.py c:/Users/Tony/.vscode/extensions/ms-python.python-2020.9.112786/pythonFiles/printEnvVariables.py
Source: Python(Extension)
My understanding is that VS Code cannot find the designated Python interpreter, which in turn prevents the kernel from being activated. (I'm not very experienced, so please correct me if I'm wrong!)
I have tried copy-pasting the python interpreter from the file that works into the corrupted file but that has not worked.
Please see below a copy of the corrupted JSON file and also an example of the metadata structure of an .ipynb file that opens successfully.
All help on this is greatly appreciated!
Many thanks, Tony
Corrupted JSON File
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"name": "numpy_random.ipynb",
"provenance": []
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "lhmyI7sB_bKJ",
"colab_type": "text"
},
"source": [
"# Generate Random Numbers"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "MulyzIdD_kxf",
"colab_type": "text"
},
"source": [
"`numpy.random` is frequently used for generating random numbers.\n",
"\n",
"For more details, please refer to [Random sampling (numpy.random)](https://docs.scipy.org/doc/numpy/reference/routines.random.html)"
]
},
{
"cell_type": "code",
"metadata": {
"id": "vtrQ-n7PAoYJ",
"colab_type": "code",
"colab": {}
},
"source": [
"import numpy as np"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "nLB8lH051pDg",
"colab_type": "text"
},
"source": [
"`np.random.seed` sets the seed for the generator."
]
},
{
"cell_type": "code",
"metadata": {
"id": "PzxXe5pR3QI0",
"colab_type": "code",
"colab": {}
},
"source": [
"np.random.seed(1)"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "cSSX-VCBAnuJ",
"colab_type": "text"
},
"source": [
"`np.random.rand` generates numbers uniformly distribution over $[0, 1)$"
]
},
{
"cell_type": "code",
"metadata": {
"id": "ran81EdeAKl6",
"colab_type": "code",
"outputId": "9adb1089-0417-490b-f31e-6730df35d688",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 53
}
},
"source": [
"np.random.rand(2, 3) # Generate 2 * 3 random numbers"
],
"execution_count": 0,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"array([[4.17022005e-01, 7.20324493e-01, 1.14374817e-04],\n",
" [3.02332573e-01, 1.46755891e-01, 9.23385948e-02]])"
]
},
"metadata": {
"tags": []
},
"execution_count": 18
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "g_jgVxcfAi00",
"colab_type": "text"
},
"source": [
"`np.radnom.randn` generates numbers following standard normal distribtion."
]
},
{
"cell_type": "code",
"metadata": {
"id": "OG-suj-PA72b",
"colab_type": "code",
"outputId": "6fda962e-bc37-4f93-9bdf-aa82b5502117",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 53
}
},
"source": [
"np.random.randn(2, 3) # Generate 2 * 3 random numbers"
],
"execution_count": 0,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"array([[-0.52817175, -1.07296862, 0.86540763],\n",
" [-2.3015387 , 1.74481176, -0.7612069 ]])"
]
},
"metadata": {
"tags": []
},
"execution_count": 19
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "SLw958olA9Uu",
"colab_type": "text"
},
"source": [
"`np.random.randint(low, high)` generates integers ranging from `low` (inclusive) to `high` (exclusive)"
]
},
{
"cell_type": "code",
"metadata": {
"id": "SkRVwD_yCd7r",
"colab_type": "code",
"outputId": "72c58ee8-4ed7-433e-8a95-60e58293f827",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 71
}
},
"source": [
"np.random.randint(low = 0, high = 4, size = (3, 3))"
],
"execution_count": 0,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"array([[3, 0, 2],\n",
" [0, 1, 2],\n",
" [2, 0, 3]])"
]
},
"metadata": {
"tags": []
},
"execution_count": 20
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "MLSvOE4UEA6n",
"colab_type": "text"
},
"source": [
"`np.random.choice` geneates random numbers following a given pmf."
]
},
{
"cell_type": "code",
"metadata": {
"id": "NidDNwzK4FRY",
"colab_type": "code",
"outputId": "13962d5d-09fb-41cb-fe32-e90186c6bfcd",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 71
}
},
"source": [
"a = np.arange(4)\n",
"p = [0.1, 0.2, 0.3, 0.4]\n",
"np.random.choice(a = a, size=(3, 4), p = p)"
],
"execution_count": 0,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"array([[2, 2, 3, 3],\n",
" [3, 3, 0, 2],\n",
" [3, 3, 3, 1]])"
]
},
"metadata": {
"tags": []
},
"execution_count": 21
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "9Arc1wQq5a2V",
"colab_type": "text"
},
"source": [
"The sample space is not necessary to be number sets."
]
},
{
"cell_type": "code",
"metadata": {
"id": "7JqVclm44hgH",
"colab_type": "code",
"outputId": "fc86099d-f55f-4d9c-f75f-930e91160710",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 71
}
},
"source": [
"a = ['Spade', 'Heart', 'Clud', 'Diamond']\n",
"p = [0.25, 0.25, 0.25, 0.25]\n",
"np.random.choice(a = a, size=(3, 4), p = p)"
],
"execution_count": 0,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"array([['Spade', 'Clud', 'Clud', 'Clud'],\n",
" ['Heart', 'Spade', 'Heart', 'Spade'],\n",
" ['Diamond', 'Heart', 'Spade', 'Clud']], dtype='<U7')"
]
},
"metadata": {
"tags": []
},
"execution_count": 22
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "O7pLtNHP5XbL",
"colab_type": "text"
},
"source": [
"It is also possible to draw random samples from other distributions."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Wa7KH_rW6WaU",
"colab_type": "text"
},
"source": [
"Exponential: $f_X(x; \\beta) = \\frac{1}{\\beta}e^{-\\frac{x}{\\beta}}$, $\\beta$ is the scale parameter."
]
},
{
"cell_type": "code",
"metadata": {
"id": "fstao_qt6Yn7",
"colab_type": "code",
"outputId": "e44f81bf-1d3f-4e52-c19b-969cd1da70e0",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 53
}
},
"source": [
"np.random.exponential(scale = 2, size = (2, 3))"
],
"execution_count": 0,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"array([[2.16136241, 0.70905534, 1.18166682],\n",
" [0.50237771, 0.15238928, 1.26688512]])"
]
},
"metadata": {
"tags": []
},
"execution_count": 23
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "cS-2YNh162s8",
"colab_type": "text"
},
"source": [
"Binomial: $P_X(k; n,p) = \\binom{n}{k} p^k (1 - p)^{n - k}$."
]
},
{
"cell_type": "code",
"metadata": {
"id": "QCNVWxTx7Utf",
"colab_type": "code",
"outputId": "31fbcc4b-4710-4c67-9ffc-2921a5ddf7e7",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 71
}
},
"source": [
"n = 10\n",
"p = 0.8\n",
"np.random.binomial(n = n, p = p, size = (3, 3))"
],
"execution_count": 0,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"array([[10, 6, 9],\n",
" [ 8, 10, 6],\n",
" [ 6, 9, 8]])"
]
},
"metadata": {
"tags": []
},
"execution_count": 24
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "e16J3-el71gV",
"colab_type": "text"
},
"source": [
"More distribution types are shown in the docs. "
]
},
{
"cell_type": "code",
"metadata": {
"id": "jkt_2jWu8DBe",
"colab_type": "code",
"colab": {}
},
"source": [
""
],
"execution_count": 0,
"outputs": []
}
]
}
Metadata of JSON file that opens successfully
"metadata": {
"kernelspec": {
"display_name": "Python 3.8.3 64-bit ('base': conda)",
"language": "python",
"name": "python_defaultSpec_1597947257101"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.3-final"
},
"toc": {
"base_numbering": 1,
"nav_menu": {},
"number_sections": true,
"sideBar": true,
"skip_h1_title": false,
"title_cell": "Table of Contents",
"title_sidebar": "Contents",
"toc_cell": false,
"toc_position": {},
"toc_section_display": true,
"toc_window_display": false
},
"varInspector": {
"cols": {
"lenName": 16,
"lenType": 16,
"lenVar": 40
},
"kernels_config": {
"python": {
"delete_cmd_postfix": "",
"delete_cmd_prefix": "del ",
"library": "var_list.py",
"varRefreshCmd": "print(var_dic_list())"
},
"r": {
"delete_cmd_postfix": ") ",
"delete_cmd_prefix": "rm(",
"library": "var_list.r",
"varRefreshCmd": "cat(var_dic_list()) "
}
},
"types_to_exclude": [
"module",
"function",
"builtin_function_or_method",
"instance",
"_Feature"
],
"window_display": false
}
},
"nbformat": 4,
"nbformat_minor": 2
}
I resolved the issue by deleting the metadata from the corrupted file. Here is the metadata that I deleted:
"metadata": {
"colab": {
"name": "numpy_random.ipynb",
"provenance": []
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
}
},
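The same fix can be scripted with the stdlib json module, replacing the problematic metadata block instead of hand-editing the file (a sketch; the filename in the commented call is taken from the question, and the minimal kernelspec used here is an assumption, not the only valid one):

```python
import json

def reset_notebook_metadata(path):
    """Load an .ipynb file and replace its top-level metadata with a
    minimal kernelspec block, keeping all cells intact."""
    with open(path, encoding="utf-8") as f:
        nb = json.load(f)
    # Drop colab/other metadata that VS Code may choke on
    nb["metadata"] = {
        "kernelspec": {"display_name": "Python 3", "language": "python", "name": "python3"}
    }
    with open(path, "w", encoding="utf-8") as f:
        json.dump(nb, f, indent=1)

# reset_notebook_metadata("numpy_random.ipynb")  # filename from the question
```

Keep a backup copy of the notebook before running anything like this against it.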
Here is a big piece of JSON data that I fetch in my code below:
{
"status": 200,
"offset": 0,
"limit": 10,
"count": 8,
"total": 8,
"url": "/v2/dictionaries/ldoce5/entries?headword=extra",
"results": [
{
"datasets": [
"ldoce5",
"dictionary"
],
"headword": "extra",
"homnum": 3,
"id": "cqAFDjvvYg",
"part_of_speech": "adverb",
"senses": [
{
"collocation_examples": [
{
"collocation": "one/a few etc extra",
"example": {
"audio": [
{
"type": "example",
"url": "/v2/dictionaries/assets/ldoce/exa_pron/p008-001627480.mp3"
}
],
"text": "I got a few extra in case anyone else decides to come."
}
}
],
"definition": [
"in addition to the usual things or the usual amount"
],
"examples": [
{
"audio": [
{
"type": "example",
"url": "/v2/dictionaries/assets/ldoce/exa_pron/p008-001627477.mp3"
}
],
"text": "They need to offer something extra to attract customers."
}
]
}
],
"url": "/v2/dictionaries/entries/cqAFDjvvYg"
},
{
"datasets": [
"ldoce5",
"dictionary"
],
"headword": "extra-",
"id": "cqAFDk1BDw",
"part_of_speech": "prefix",
"pronunciations": [
{
"audio": [
{
"lang": "American English",
"type": "pronunciation",
"url": "/v2/dictionaries/assets/ldoce/us_pron/extra__pre.mp3"
}
],
"ipa": "ekstrə"
}
],
"senses": [
{
"definition": [
"outside or beyond"
],
"examples": [
{
"audio": [
{
"type": "example",
"url": "/v2/dictionaries/assets/ldoce/exa_pron/p008-001832333.mp3"
}
],
"text": "extragalactic (=outside our galaxy)"
}
]
}
],
"url": "/v2/dictionaries/entries/cqAFDk1BDw"
},
{
"datasets": [
"ldoce5",
"dictionary"
],
"headword": "extra",
"homnum": 1,
"id": "cqAFDjpNZQ",
"part_of_speech": "adjective",
"pronunciations": [
{
"audio": [
{
"lang": "British English",
"type": "pronunciation",
"url": "/v2/dictionaries/assets/ldoce/gb_pron/extra_n0205.mp3"
},
{
"lang": "American English",
"type": "pronunciation",
"url": "/v2/dictionaries/assets/ldoce/us_pron/extra1.mp3"
}
],
"ipa": "ˈekstrə"
}
],
"senses": [
{
"collocation_examples": [
{
"collocation": "an extra ten minutes/three metres etc",
"example": {
"audio": [
{
"type": "example",
"url": "/v2/dictionaries/assets/ldoce/exa_pron/p008-001202489.mp3"
}
],
"text": "I asked for an extra two weeks to finish the work."
}
}
],
"definition": [
"more of something, in addition to the usual or standard amount or number"
],
"examples": [
{
"audio": [
{
"type": "example",
"url": "/v2/dictionaries/assets/ldoce/exa_pron/p008-001202484.mp3"
}
],
"text": "Could you get an extra loaf of bread?"
}
],
"gramatical_info": {
"type": "only before noun"
}
}
],
"url": "/v2/dictionaries/entries/cqAFDjpNZQ"
},
{
"datasets": [
"ldoce5",
"dictionary"
],
"headword": "extra",
"homnum": 2,
"id": "cqAFDjsQjH",
"part_of_speech": "pronoun",
"senses": [
{
"collocation_examples": [
{
"collocation": "pay/charge/cost etc extra",
"example": {
"audio": [
{
"type": "example",
"url": "/v2/dictionaries/assets/ldoce/exa_pron/p008-001202499.mp3"
}
],
"text": "I earn extra for working on Sunday."
}
}
],
"definition": [
"an amount of something, especially money, in addition to the usual, basic, or necessary amount"
],
"synonym": "more"
}
],
"url": "/v2/dictionaries/entries/cqAFDjsQjH"
},
{
"datasets": [
"ldoce5",
"dictionary"
],
"headword": "extra",
"homnum": 4,
"id": "cqAFDjyTn8",
"part_of_speech": "noun",
"senses": [
{
"definition": [
"something which is added to a basic product or service that improves it and often costs more"
],
"examples": [
{
"audio": [
{
"type": "example",
"url": "/v2/dictionaries/assets/ldoce/exa_pron/p008-001202524.mp3"
}
],
"text": "Tinted windows and a sunroof are optional extras(=something that you can choose to have or not)."
}
]
}
],
"url": "/v2/dictionaries/entries/cqAFDjyTn8"
},
{
"datasets": [
"ldoce5",
"dictionary"
],
"headword": "extra virgin",
"id": "cqAFDmV2Jw",
"part_of_speech": "adjective",
"senses": [
{
"definition": [
"extra virgin olive oil comes from olives that are pressed for the first time, and is considered to be the best quality olive oil"
]
}
],
"url": "/v2/dictionaries/entries/cqAFDmV2Jw"
},
{
"datasets": [
"ldoce5",
"dictionary"
],
"headword": "extra time",
"id": "cqAFDmGZyQ",
"part_of_speech": "noun",
"senses": [
{
"american_equivalent": "overtime",
"definition": [
"a period, usually of 30 minutes, added to the end of a football game in some competitions if neither team has won after normal time"
],
"examples": [
{
"audio": [
{
"type": "example",
"url": "/v2/dictionaries/assets/ldoce/exa_pron/p008-001627835.mp3"
}
],
"text": "The match went into extra time."
}
],
"geography": "especially British English",
"gramatical_examples": [
{
"examples": [
{
"audio": [
{
"type": "example",
"url": "/v2/dictionaries/assets/ldoce/exa_pron/p008-001627834.mp3"
}
],
"text": "Beckham scored in extra time."
}
],
"pattern": "in extra time"
}
]
}
],
"url": "/v2/dictionaries/entries/cqAFDmGZyQ"
},
{
"datasets": [
"ldoce5",
"dictionary"
],
"headword": "extra-sensory perception",
"id": "cqAFDm6ceW",
"part_of_speech": "noun",
"senses": [
{
"definition": [
"ESP"
]
}
],
"url": "/v2/dictionaries/entries/cqAFDm6ceW"
}
]
}
I want to grab and print the definitions offered in the JSON results. I don't know how to express this, and I'm getting a 'list indices must be integers or slices, not str' error on my sense = data['senses'] line.
#!/usr/bin/env python
import urllib.request
import json
wp = urllib.request.urlopen("http://api.pearson.com/v2/dictionaries/ldoce5/entries?headword=extra").read().decode('utf8')
jsonData=json.loads(wp)
data=jsonData['results']
for item in data:
    sense = data['senses']
    print(senses['definition'])
The error comes from indexing the list data with a string key; inside the loop you want item['senses'], which is itself a list with a single element, a dictionary. That contained dictionary has your desired key-value pair.
For example:
for item in data:
    sense = item['senses'][0]
    print(sense['definition'])