asyncpg connection was closed - python

I have built a scraper using Selenium in one Docker container, while the database lives on a small Linode server.
Scraped data is then inserted into a Postgres database on the Linode.
The scraped data is stored as a list of dicts (List[Dict]).
However, this error is sometimes raised when trying to insert the data.
Client log:
asyncpg.exceptions.ConnectionDoesNotExistError: connection was closed in the middle of operation
Server log:
LOG: could not receive data from client: Connection reset by peer
LOG: could not receive data from client: Operation timed out
I have tried numerous solutions from Stack Overflow, such as:
Connection was closed in the middle of operation when accesing database using Python
and also setting the TCP keepalive parameters on the Postgres side to:
# - TCP settings -
# see "man tcp" for details
tcp_keepalives_idle = 300 # TCP_KEEPIDLE, in seconds;
# 0 selects the system default
tcp_keepalives_interval = 60 # TCP_KEEPINTVL, in seconds;
# 0 selects the system default
tcp_keepalives_count = 100 # TCP_KEEPCNT;
but to no avail.
Additionally, I have tried to log any errors in postgres itself but there doesn't seem to be any.
These are my log settings
# - Where to Log -
log_destination = 'stderr' # Valid values are combinations of
# stderr, csvlog, syslog, and eventlog,
# depending on platform. csvlog
# requires logging_collector to be on.
# This is used when logging to stderr:
logging_collector = on # Enable capturing of stderr and csvlog
# into log files. Required to be on for
# csvlogs.
# (change requires restart)
# These are only used if logging_collector is on:
log_directory = 'pg_log' # directory where log files are written,
# can be absolute or relative to PGDATA
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # log file name pattern,
# can include strftime() escapes
#log_file_mode = 0600 # creation mode for log files,
# begin with 0 to use octal notation
#log_rotation_age = 1d # Automatic rotation of logfiles will
# happen after that time. 0 disables.
#log_rotation_size = 10MB # Automatic rotation of logfiles will
# happen after that much log output.
# 0 disables.
#log_truncate_on_rotation = off # If on, an existing log file with the
# same name as the new log file will be
# truncated rather than appended to.
# But such truncation only occurs on
# time-driven rotation, not on restarts
# or size-driven rotation. Default is
# off, meaning append to existing files
# in all cases.
One theory I have is that some data takes longer to scrape and format, leaving the connection idle long enough to be reset. However, subsequent inserts succeed.
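If that theory holds, one common mitigation is to reconnect and retry the insert when the connection has died mid-operation. Below is a minimal sketch of the retry shape only; it uses plain asyncio and a generic ConnectionError so it runs without a database, whereas in the real script you would catch asyncpg.exceptions.ConnectionDoesNotExistError and reopen the connection (or draw a fresh one from an asyncpg.create_pool pool) inside the loop:

```python
import asyncio

async def run_with_retry(operation, attempts=3, delay=1.0):
    """Retry an async operation when the connection drops.

    operation is a coroutine function that (re)opens its own connection
    and performs the insert; with asyncpg you would catch
    asyncpg.exceptions.ConnectionDoesNotExistError instead of the
    plain ConnectionError used here.
    """
    for attempt in range(1, attempts + 1):
        try:
            return await operation()
        except ConnectionError:
            if attempt == attempts:
                raise  # give up after the last attempt
            await asyncio.sleep(delay)  # back off, then reconnect and retry
```

With asyncpg, `operation` would reopen the connection (e.g. via `asyncpg.connect(...)`) and run `executemany` over the List[Dict] rows; keeping a pool rather than one long-lived idle connection between scrapes also sidesteps the keepalive problem.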
Any help would be appreciated! Thanks


Python Cheats pymem address

In the sample script:
import pymem
import pymem.process
import pymem.memory

process = pymem.process
mem = pymem.memory

DMC5 = pymem.Pymem("Game.exe")
DMC5_base = DMC5.process_handle
address = 0x1F1BFF714C8
value = 99
mem.write_int(DMC5_base, address, value)
The script works fine without any problems. But if I close the game and launch it again, the address changes and I have to manually insert a new one into the script. Is there any way to use a static address?
To find a reliable pointer, you need to find a static address, plus offsets, that always points to the address you want. This is a common issue when cheating in games via memory modification. Here's a tutorial on how to do it for Cheat Engine: https://www.solarstrike.net/phpBB3/viewtopic.php?t=65
Here's another tutorial on how to do it with MHS + CE: https://progamercity.net/ghack-tut/229-tutorial-maplestory-finding-pointers-ce-amp-mhs.html
Essentially, the typical approach is to use a debugger to find the code that reads or writes the address, then inspect that assembly to determine which base address and offsets were used to reach it. You then take the base address that the pointer was added to, use the debugger again to see what reads that value and which offsets and pointers it uses, and repeat. You'll usually have to do this 2-3 times before you find a static address.
Once you get the pointer offsets and base address, you would then access that memory address via regular pointer logic. Here's an example of how: How do I look up the value of a multi-level pointer inside a process in Python?
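The chain walk itself is just "read a pointer, add an offset" repeated. Here is a small sketch of that logic; the `read_ptr` callback is a stand-in for whatever read call your memory library provides (with pymem, something like a wrapper around `read_longlong`):

```python
def resolve_pointer_chain(read_ptr, static_address, offsets):
    """Follow a multi-level pointer chain to the final value address.

    read_ptr(addr) must return the pointer stored at addr (a placeholder
    for your memory library's read call). The final offset is added but
    not dereferenced, matching Cheat Engine's pointer display.
    """
    addr = read_ptr(static_address)
    for offset in offsets[:-1]:
        addr = read_ptr(addr + offset)
    return addr + offsets[-1]
```

Once `resolve_pointer_chain` gives you the current value address, you read or write it as usual; rerunning the chain after a game restart yields the new address automatically.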
You can also use the ReadWriteMemory module.
For example - here's a Python script that reads and writes the multi-level pointer value from Step 8 of the 32-bit Cheat Engine tutorial:
from ReadWriteMemory import ReadWriteMemory
base_address = 0x00400000 # "Tutorial-i386.exe"
static_address_offset = 0x002426E0 # the offset from the base of the static address of the pointer chain
pointer_static_address = base_address + static_address_offset # "Tutorial-i386.exe" + 2426E0
offsets = [0x0C, 0x14, 0x00, 0x18]
rwm = ReadWriteMemory()
process = rwm.get_process_by_name('Tutorial-i386.exe')
process.open()
my_pointer = process.get_pointer(pointer_static_address, offsets=offsets)
pointer_value = process.read(my_pointer)
print(f'Value: {pointer_value}')
value_to_set = int(input('Enter a value: '))
process.write(my_pointer, value_to_set)
There's also a very good chance the offsets are already listed on some forum; I recommend googling for them.

(Raspberry Pi - Model 3 B+) Getting wrong values when reading from serial ports ttyS0 or ttyAMA

I have Python code that reads from the serial ports 'ttyS0' or 'ttyAMA0' at a baud rate of 1.5 Mbps.
In my code I basically read from the UART buffer, convert the bytes I get to hexadecimal, append the output to a list, and whenever the list reaches 10000 elements, I print the result.
After that I check it manually to see if everything is correct.
import serial
import time

def getUART():
    # baudrateDaRede = 1507500
    baudrateDaRede = 1500000
    # tty = "ttyAMA0"
    tty = "ttyS0"

    ser = serial.Serial()
    ser.port = '/dev/' + tty
    ser.baudrate = 1234
    ser.parity = serial.PARITY_EVEN
    ser.stopbits = serial.STOPBITS_ONE
    ser.bytesize = serial.EIGHTBITS
    ser.timeout = 0.1
    ser.open()
    ser.close()

    ser = serial.Serial()
    ser.port = '/dev/' + tty
    ser.baudrate = baudrateDaRede
    ser.parity = serial.PARITY_EVEN
    ser.stopbits = serial.STOPBITS_ONE
    ser.bytesize = serial.EIGHTBITS
    ser.timeout = None
    ser.exclusive = True
    ser.open()
    time.sleep(2)

    index = 0
    vet = []
    while True:
        # time.sleep(0.01)
        if ser.inWaiting() == 0:
            continue
        uartRead = ser.read(ser.inWaiting())
        uartReadhex = uartRead.hex()
        if not index < 200:  # skip the first 200 reads
            vet.append(uartReadhex)
            if len(vet) > 10000:
                uartHexa = "".join(vet)
                print(uartHexa)
                vet.clear()
                break
        index += 1

getUART()
However, while checking the result, sometimes part of it is missing some 0's, or has more 0's than it should.
Example: When reading the output, a part of it should be:
(hex) 6807076860027d00000000df1668
However, I am getting this:
(hex) 6807076860027d0000000000df1668
It happens sporadically, but it is enough to make my software not work as intended, since this hexadecimal is the representation of a packet (and in this case it would be a broken packet).
PS: I know for sure that the packet is not really broken, because I have checked it on hardware other than the Raspberry Pi and I get the correct values.
I tried using minicom to read from the port and got the same problem, so I think my code is not the issue here; maybe it has something to do with the Raspberry itself.
I am running the code on a Raspberry Pi Model 3 B+ using Raspbian (2019-04-08-raspbian-stretch).
I overclocked it a little, so I am posting the configuration file of my Raspbian below:
# For more options and information see
# http://rpf.io/configtxt
# Some settings may impact device functionality. See link above for details
# uncomment if you get no picture on HDMI for a default "safe" mode
#hdmi_safe=1
# uncomment this if your display has a black border of unused pixels visible
# and your display can output without overscan
disable_overscan=1
# uncomment the following to adjust overscan. Use positive numbers if console
# goes off screen, and negative if there is too much border
#overscan_left=16
#overscan_right=16
#overscan_top=16
#overscan_bottom=16
# uncomment to force a console size. By default it will be display's size minus
# overscan.
#framebuffer_width=1280
#framebuffer_height=720
# uncomment if hdmi display is not detected and composite is being output
#hdmi_force_hotplug=1
# uncomment to force a specific HDMI mode (this will force VGA)
#hdmi_group=1
#hdmi_mode=1
# uncomment to force a HDMI mode rather than DVI. This can make audio work in
# DMT (computer monitor) modes
#hdmi_drive=2
# uncomment to increase signal to HDMI, if you have interference, blanking, or
# no display
#config_hdmi_boost=4
# uncomment for composite PAL
#sdtv_mode=2
#uncomment to overclock the arm. 700 MHz is the default.
arm_freq=1200
enable_uart=1
force_turbo=1
core_freq=425
gpu_freq=425
#test to over clock ttyAMA0
#init_uart_clock=96000000
#init_uart_baud=1500000
# Uncomment some or all of these to enable the optional hardware interfaces
#dtparam=i2c_arm=on
#dtparam=i2s=on
#dtparam=spi=on
# Uncomment this to enable the lirc-rpi module
#dtoverlay=lirc-rpi
#dtoverlay=pi3-disable-bt
# Additional overlays and parameters are documented /boot/overlays/README
# Enable audio (loads snd_bcm2835)
dtparam=audio=on

How to insert EEG triggers from one PsychoPy script into another?

I am working on adding EEG triggers to a PsychoPy script that I wrote through Builder mode, as I am new to coding. The experiment is a series of audio recordings of sentence stems paired with visual word endings; the recordings and words are called up through a spreadsheet. We are interested in participants' responses upon viewing the word endings.
Below is my current script without the EEG triggers, and beneath it is a script from someone else using the same system, which they have used to insert EEG triggers. I am looking to record beginning at the end of the "Sentences" stimulus, through the "target" and "response" components, and ending after the participant makes their response.
Thank you very much for any help!
Here is the script I already have:
# ------Prepare to start Routine "trial1"-------
t = 0
trial1Clock.reset()  # clock
frameN = -1
continueRoutine = True
# update component parameters for each repeat
target.setColor([1.000, 1.000, 1.000], colorSpace='rgb')
target.setText(word)
response = event.BuilderKeyResponse()
Sentences.setSound(sounds, secs=6)
# keep track of which components have finished
trial1Components = [target, response, Sentences, text_2]
for thisComponent in trial1Components:
    if hasattr(thisComponent, 'status'):
        thisComponent.status = NOT_STARTED
And here is the code to insert EEG triggers I am trying to integrate:
# Send event marker to NetStation
if mode == 'eeg' and stage == 'expt':
    code = 'item'
    ns.sync()
    ns.send_event(code, label='item', timestamp=egi.ms_localtime(), table={'item': curr_item})
You say that you "wrote the code using Builder". Did you change the code after Builder? If not, then it's always best to work from Builder itself to allow you to change other aspects of the experiment while keeping your triggers. Assuming that you can work in Builder:
If you send triggers via the parallel port, there's a component for that under I/O --> Parallel port.
Otherwise, you can insert a Code Component to run your code at the desired times:
In the "begin experiment" tab, add import xxxx as ns or however you created the ns object.
In the "begin routine" tab, add your trigger code to mark stimulus onset.
To mark stimulus offset, go to the "each frame" tab and either (a) check the stimulus status, e.g. if stim.status == FINISHED:, or (b) send the trigger at the predicted offset by setting trigger_sent = False in "begin routine" and then checking if t > 2 and not trigger_sent: (if your stimulus is 2 seconds long).
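Option (b) can be sketched as a tiny piece of "each frame" logic. It is factored here as a plain function so the fire-exactly-once behaviour is clear; `send_trigger` is a placeholder for your actual NetStation call (e.g. the ns.send_event snippet from the question):

```python
def maybe_send_offset_trigger(t, trigger_sent, send_trigger, duration=2.0):
    """'Each frame' logic: fire the offset trigger exactly once,
    on the first frame after the stimulus' predicted offset time.

    t            -- routine clock time in seconds
    trigger_sent -- flag initialised to False in 'begin routine'
    send_trigger -- callable that sends the marker (placeholder)
    """
    if t > duration and not trigger_sent:
        send_trigger('stim_offset')
        return True
    return trigger_sent
```

In Builder you would write the body directly in the Code Component's "each frame" tab, using your own trigger call in place of `send_trigger` and updating `trigger_sent` after sending.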

GATT Characteristics with read-property not found by application

I am trying to develop an application that is communicating with an external device using BLE. I have decided to use pygatt (Python) with BGAPI (using a BlueGiga dongle).
The device I am communicating with has a custom primary service with a set of characteristics. According to their specs they have 2 READ characteristics, 8 NOTIFY chars and 1 WRITE char. Initially, I want to read one of the two READ chars, but I am unable to do so. Their UUIDs are not recognized as characteristics. How can this be? I am 100% certain that they are entered correctly.
import pygatt
import bleconnect
import blelib
import logging

logging.basicConfig()
logging.getLogger('pygatt').setLevel(logging.DEBUG)

adapter = pygatt.BGAPIBackend(serial_port='/dev/tty.usbmodem1')
adapter.start()

# Find the device
result = adapter.scan(timeout=5)
for item in result:
    scan_name = item['name']
    scan_rssi = item['rssi']
    scan_address = item['address']
    if scan_name == bleconnect.TARGET_NAME:
        break

# Connect
device = adapter.connect(address=scan_address)
device.char_read(blelib.CHARACTERISTIC_DEVICE_FEATURES)
I can see in the debug messages that all the NOTIFY and WRITE characteristics are found, but not the two READ characteristics.
What am I missing?
This appears to be some kind of shortcoming in the pygatt API. I managed to read the actual value by using the bgapi library directly.

SoftLayer API: How to capture an image with specific data disks?

I have a VM with disks 1, 2, 3, 4, and I want to do some image operations:
Q1: How can I capture an image that contains only the system disk and disk 3?
Q2: If I create the image described in Q1, can I use this image to
install or reload a VM? How does the SoftLayer API handle disk 3 in the
image?
Q3: Can I make a snapshot image of only disk 3?
Q4: If I create the snapshot described in Q3, how can I use it to
initialize a disk?
At the moment, when creating an image template you can specify which block devices you want included in it; you can do that using both the API and the portal.
Here is an example using the API:
"""
Create image template.
The script creates a standard image template, it makes
a call to the SoftLayer_Virtual_Guest::createArchiveTransaction method
sending the IDs of the disks in the request.
For more information please see below.
Important manual pages:
https://sldn.softlayer.com/reference/services/SoftLayer_Virtual_Guest
https://sldn.softlayer.com/reference/services/SoftLayer_Virtual_Guest/createArchiveTransaction
https://sldn.softlayer.com/reference/datatypes/SoftLayer_Virtual_Guest_Block_Device
License: http://sldn.softlayer.com/article/License
Author: SoftLayer Technologies, Inc. <sldn#softlayer.com>
"""
import SoftLayer
# Your SoftLayer API username and key.
USERNAME = 'set me'
API_KEY = 'set me'
# The virtual guest ID you want to create a template
virtualGuestId = 4058502
# The name of the image template
groupName = 'my image name'
# An optional note for the image template
note = 'an optional note'
"""
Build a skeleton SoftLayer_Virtual_Guest_Block_Device object
containing the disks you want to the image.
In this case we are going take an image template of 2 disks
from the virtual machine.
"""
blockDevices = [
{
"id": 4667098,
"complexType": "SoftLayer_Virtual_Guest_Block_Device"
},
{
"id": 4667094,
"complexType": "SoftLayer_Virtual_Guest_Block_Device"
}
]
# Declare a new API service object
client = SoftLayer.Client(username=USERNAME, api_key=API_KEY)
try:
# Creating the transaction for the image template
response = client['SoftLayer_Virtual_Guest'].createArchiveTransaction(groupName, blockDevices, note, id=virtualGuestId)
print(response)
except SoftLayer.SoftLayerAPIError as e:
"""
# If there was an error returned from the SoftLayer API then bomb out with the
# error message.
"""
print("Unable to create the image template. faultCode=%s, faultString=%s" % (e.faultCode, e.faultString))
You only need to get the block device (disk) IDs; for that you can call this method:
http://sldn.softlayer.com/reference/services/SoftLayer_Virtual_Guest/getBlockDevices
There are some rules for the block devices:
Only block devices of type disk can be captured.
The swap block device cannot be included in the list of block devices to capture (this is disk number 1).
The block device which contains the OS must be included (this is disk number 0).
Block devices which contain metadata cannot be included in the image.
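The rules above can be sketched as a small filter over the result of getBlockDevices. The field names used here (an `id`, plus a nested `diskImage` with a `type` `keyName` such as 'SWAP' or 'METADATA') follow the SoftLayer_Virtual_Guest_Block_Device datatype, but treat them as assumptions and verify them against your actual API responses:

```python
def capturable_block_devices(block_devices):
    """Filter getBlockDevices output down to what createArchiveTransaction
    should accept, per the rules above. Field names are assumed from the
    SoftLayer_Virtual_Guest_Block_Device datatype; check your responses.
    """
    selected = []
    for dev in block_devices:
        if 'diskImage' not in dev:
            continue  # not a disk-type block device (e.g. a CD device)
        disk_type = dev.get('diskImage', {}).get('type', {}).get('keyName')
        if disk_type in ('SWAP', 'METADATA'):
            continue  # swap (disk 1) and metadata disks cannot be captured
        selected.append({"id": dev["id"],
                         "complexType": "SoftLayer_Virtual_Guest_Block_Device"})
    return selected
```

The returned list can then be passed as the blockDevices argument of createArchiveTransaction, as in the example above.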
When you order a new device using this image template, keep the following in mind:
If you are using the placeOrder method, you need to make sure you add the prices for the extra disks.
If you are using the createObject method, the number of disks will be taken from the image template, so it is not necessary to specify the extra disks.
You can also use image templates in reloads, but a reload only affects the disk which contains the OS. So if you have a virtual machine with 3 disks and perform a reload, only the OS disk is affected, even if the image template has 3 disks.
If your order fails due to lack of disk capacity or other issues, there will be errors at provisioning time and the VSI will not be provisioned; most likely a ticket will be opened and a SoftLayer employee will inform you about it.
Regards
