Fixing date & time format in HTML - python

Context of Question
Hi all, I'm building a Twitter clone. I'm using JavaScript to create & insert a new Tweet into the news feed.
After inserting the new Tweet into the news feed, I noticed the date & time format is different after refreshing the page. Examples below for reference:
Tweet after inserting into news feed & before refreshing the page
Tweet after refreshing the page
The Tweet in JSON format looks like this
{
    "id": 56,
    "user": "admin",
    "content": "yes",
    "date": "Feb 07 2023, 12:26 AM",
    "likes": 0
}
I'm using Django as the backend & here is how a Tweet gets serialized:
def serialize(self):
    return {
        "id": self.id,
        "user": self.user.username,
        "content": self.content,
        "date": self.date.strftime("%b %d %Y, %I:%M %p"),
        "likes": self.likes
    }
Now, I'm not sure what's causing this mismatch of date & time format:
the serializer for the Tweet, or
how HTML handles the date & time format
Django template for Tweet
<small class="text-muted tweet-date">{{tweet.date}}</small>
JavaScript code for inserting Tweet
function createTweet(tweetJsonObject) {
    // clone Tweet from Tweet template
    let newTweet = document.querySelector(".tweet-template").cloneNode(true);
    newTweet.style.display = "block";
    newTweet.querySelector(".tweet-date").textContent = tweetJsonObject.date;
    newTweet.querySelector(".tweet-content").textContent = tweetJsonObject.content;
    newTweet.querySelector(".tweet-likes").textContent = tweetJsonObject.likes;
    newTweet.dataset.id = tweetJsonObject.id;
    // remove tweet-template & add tweet class
    newTweet.classList.remove('tweet-template');
    newTweet.classList.add('tweet');
    return newTweet;
}

Use the date filter to format a date in a Django template
The mismatch isn't caused by HTML: your JavaScript inserts the pre-formatted string from the JSON, while after a refresh the template renders the raw datetime with Django's default formatting. Apply the date filter so the template output matches the format used in your JSON:
<small class="text-muted tweet-date">{{ tweet.date|date:"M d Y, h:i A" }}</small>
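As a quick sanity check (a Python sketch, not part of the original code), the strftime format used in serialize() and the filter string above should render the same text for the same timestamp:

from datetime import datetime

# "%b %d %Y, %I:%M %p" is the strftime format from serialize();
# "M d Y, h:i A" is its Django date-filter equivalent.
dt = datetime(2023, 2, 7, 0, 26)
print(dt.strftime("%b %d %Y, %I:%M %p"))  # -> Feb 07 2023, 12:26 AM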

Related

How to save json data as it is without data type conversion in dynamo db using python

I want to store key-value JSON data in AWS DynamoDB, where the key is a date string in YYYY-mm-dd format and the value is entries, a Python dictionary. When I used the boto3 client to save the data, it was saved as a typed DynamoDB object, which I don't want. My purpose is simple: store JSON data against a date key, so that later I can query the data by that date. I am struggling with this issue because I did not find any relevant link explaining how to store JSON data and retrieve it without any conversion.
I need help solving it in Python.
What I am doing now:
import boto3

item = {
    "entries": [
        {
            "path": [
                {
                    "name": "test1",
                    "count": 1
                },
                {
                    "name": "test2",
                    "count": 2
                }
            ],
            "repo": "test3"
        }
    ],
    "date": "2022-10-11"
}
dynamodb_client = boto3.resource('dynamodb')
table = dynamodb_client.Table(table_name)
response = table.put_item(Item=item)
What was actually saved:
[{"M":{"path":{"L":[{"M":{"name":{"S":"test1"},"count":{"N":"1"}}},{"M":{"name":{"S":"test2"},"count":{"N":"2"}}}]},"repo":{"S":"test3"}}}]
But I want to save exactly the same JSON data as it is, without any conversion at all.
When I retrieve it programmatically, you can see the differences: single quotes instead of double quotes, and the count values converted to Decimal.
response = table.get_item(
    Key={
        "date": "2022-10-12"
    }
)
Output
{'Item': {'entries': [{'path': [{'name': 'test1', 'count': Decimal('1')}, {'name': 'test2', 'count': Decimal('2')}], 'repo': 'test3'}], 'date': '2022-10-12'}}
Why not store it as a single attribute of type string? Then you’ll get out exactly what you put in, byte for byte.
When you store this in DynamoDB you get exactly what you want/have provided: the key is your date and you have a list of entries.
If you need it stored in a different format, you need to provide JSON that matches what you need. It's important to note that DynamoDB is a key-value store, not a document store; it's worth looking up the differences between the two.
I figured out how to solve this issue. I have two columns, date and entries, in my DynamoDB table.
I convert the entries value from a list to a string, then save it in the DB. At retrieval time I do the reverse: parse the string, build a proper JSON response, and return it.
I am also sharing sample code below so that anybody else dealing with the same situation has at least one option.
import json

import boto3

# While storing:
entries_string = json.dumps([
    {
        "path": [
            {
                "name": "test1",
                "count": 1
            },
            {
                "name": "test2",
                "count": 2
            }
        ],
        "repo": "test3"
    }
])
item = {
    "entries": entries_string,
    "date": "2022-10-12"
}
dynamodb_client = boto3.resource('dynamodb')
table = dynamodb_client.Table(<TABLE-NAME>)
table.put_item(Item=item)

# -------------------------
# While fetching:
response = table.get_item(
    Key={
        "date": "2022-10-12"
    }
)['Item']
entries_string = response['entries']
entries_dic = json.loads(entries_string)
response['entries'] = entries_dic
print(json.dumps(response))
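A side note on the Decimal values seen earlier: if you do keep native DynamoDB types instead of a JSON string, the boto3 resource layer returns numbers as decimal.Decimal, which json.dumps rejects by default. A minimal workaround (a sketch, not from the original answer) is a default hook:

import json
from decimal import Decimal

def decimal_default(obj):
    # Convert DynamoDB's Decimal values back to int/float for JSON output.
    if isinstance(obj, Decimal):
        return int(obj) if obj % 1 == 0 else float(obj)
    raise TypeError(f"Not JSON serializable: {type(obj)}")

print(json.dumps({"count": Decimal("1")}, default=decimal_default))  # {"count": 1}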

Is it Possible to Change the Index of JSON after every Iteration (Avoiding Duplication of Data)?

What I'm basically doing is scraping data from websites -> saving to CSV -> converting to JSON and posting the JSON data to Firebase using firebase-import. This is the doc for firebase-import: https://github.com/FirebaseExtended/firebase-import.
Since Firebase doesn't allow creating unique keys via firebase-import, the data is completely dependent on the index number of the JSON data (you can see in the picture that no index number is the same, so the entries are treated as unique objects).
This is a screenshot of the Firebase database, showing what I mean by index.
This is the same data (structure) in raw format:
{
    "0": {
        "title": "Title 1",
        "description": "Description 1 here"
    },
    "1": {
        "title": "Title 2",
        "description": "Description 2 here"
    },
    "2": {
        "title": "Title 3",
        "description": "Description 3 here"
    }
}
I'm using Python (the pandas module) to convert the received CSV to JSON. This is the code:

import pandas as pd

# read_csv already returns a DataFrame, so no extra pd.DataFrame() wrapper is needed
csv_file = pd.read_csv("file_without_dupes.csv", sep=",", header=0,
                       index_col=False, encoding="utf-8-sig")
json_file = csv_file.to_json(orient="index", date_format="epoch",
                             double_precision=10, date_unit="ms",
                             default_handler=None)
with open("MiningJson.json", "w") as file:
    file.write(json_file)
The issue is that every time the next conversion happens, the index starts from 0 again and overwrites all the previous indexes. This is what happens when new data is inserted:
{
    "0": {
        "title": "Title 4",
        "description": "Description 4 here"
    }
}
{
    "1": {
        "title": "Title 5",
        "description": "Description 5 here"
    }
}
Also, after every iteration, the JSON file is cleared completely.
Is it possible to avoid this, for example by getting the last index of the JSON and saving it to a txt file, so that whenever new data is inserted, the script reads the last index from the txt file and increments from there? That way it would never overwrite the existing data, and since the numbers never run out, every key would be unique.
Please mention a better solution if you can think of one. Thanks
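One way to implement the last-index idea (a sketch under assumptions: the counter lives in a file named last_index.txt next to the script, and MiningJson.json accumulates all batches):

import json
import os

import pandas as pd

INDEX_FILE = "last_index.txt"    # hypothetical counter file
OUTPUT_FILE = "MiningJson.json"

def next_start_index():
    # Resume numbering from the last run; start at 0 the first time.
    if os.path.exists(INDEX_FILE):
        with open(INDEX_FILE) as f:
            return int(f.read().strip()) + 1
    return 0

df = pd.read_csv("file_without_dupes.csv", sep=",", header=0,
                 index_col=False, encoding="utf-8-sig")
start = next_start_index()
df.index = range(start, start + len(df))  # shift the keys so they never repeat

# Merge the new batch into the existing file instead of overwriting it.
existing = {}
if os.path.exists(OUTPUT_FILE):
    with open(OUTPUT_FILE) as f:
        existing = json.load(f)
existing.update(json.loads(df.to_json(orient="index")))

with open(OUTPUT_FILE, "w") as f:
    json.dump(existing, f)
with open(INDEX_FILE, "w") as f:
    f.write(str(start + len(df) - 1))  # remember the last index used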

How to create model instances from csv file

The task is to parse a CSV file and create instances in the database based on the received data. The backend is DRF and the frontend is React.
The specific feature is that the file processing is not completely hidden. The logic is as follows:
There is a button to upload the file. The file is uploaded and validated, but nothing is created in the database right away. A window appears with a list of the parsed data (like a table), and in this window there is another button to confirm; clicking it is what actually writes to the database.
What I just did:
1. Created a class to upload the file (Upload button)
from django.core.files.storage import default_storage
from rest_framework.exceptions import ParseError
from rest_framework.parsers import FormParser, MultiPartParser
from rest_framework.renderers import JSONRenderer
from rest_framework.response import Response
from rest_framework.views import APIView

class FileUploadView(APIView):
    parser_classes = (MultiPartParser, FormParser)
    renderer_classes = [JSONRenderer]

    def put(self, request, format=None):
        if 'file' not in request.data:
            raise ParseError("Empty content")
        f = request.data['file']
        filename = f.name
        if filename.endswith('.csv'):
            file = default_storage.save(filename, f)
            r = csv_file_parser(file)
            status = 204
        else:
            status = 406
            r = "File format error"
        return Response(r, status=status)
In the class, the csv_file_parser function is called; its result is JSON containing all the parsed data, like this:
{
    "1": {
        "Vendor": "Firstvendortestname",
        "Country": "USA",
        ...
        ...
        "Modules": "Module1",
        " NDA date": "2019-12-24"
    },
    "2": {
        "Vendor": "Secondvendortestname",
        "Country": "Canada",
        ...
        ...
        "Modules": "Module1",
        " NDA date": "2019-12-24"
    }
}
This data will be used to preview the fields from which the model instances will be created in the database by clicking the Confirm button.
csv_file_parser function
import csv

def csv_file_parser(file):
    result_dict = {}
    with open(file) as csvfile:
        reader = csv.DictReader(csvfile)
        line_count = 1
        for rows in reader:
            for key, value in rows.items():
                if not value:
                    raise ParseError('Missing value in file. Check the {} line'.format(line_count))
            result_dict[line_count] = rows
            line_count += 1
    return result_dict
When the Confirm button is pressed, React passes this data as an argument to a class that works with the database using the POST method. I'm having difficulty implementing this class. How do I correctly process the received data and save it to the database?
class CsvToDatabase(APIView):
    def post(self, request, format=None):
        data = request.data
        for vendor in data:
            Vendors(
                vendor_name=vendor['Vendor'],
                country=vendor['Country']
            ).save()
        return Response({'received data': request.data})
This code gives the error:
TypeError at /api/v1/vendors/from_csv_create/
string indices must be integers
Printing request.data gives this output:
<QueryDict: {'{\n "1": {\n "Vendor": "Firstvendortestname",\n "Country": "USA",\n "Primary Contact Name": "Jack Jhonson",\n "Primary Contact Email": "jack#gmail.com",\n "Secondary Contact Name": "Jack2 Jhonson",\n "Secondary Contact Email": "jack2#gmail.com",\n "Modules": "Module1, Module2",\n " NDA date": "2019-12-24"\n },\n "2": {\n "Vendor": "Secondvendortestname",\n "Country": "Canada",\n "Primary Contact Name": "Sandra Bullock",\n "Primary Contact Email": "sandra#gmail.com",\n "Secondary Contact Name": "Sandra Bullock",\n "Secondary Contact Email": "sandra#gmail.com",\n "Modules": "Module1, Module2",\n " NDA date": "2019-12-24"\n }\n}': ['']}>
Maybe I'm using the wrong data format?
And overall, I have a feeling that I'm doing the job the wrong way. I don't use serializers, do I need them here?
You are iterating over the dict keys, but you should iterate over its items:
for key, vendor in data.items():
    Vendors(
        vendor_name=vendor['Vendor'],
        country=vendor['Country']
    ).save()
First of all, my suggestion is to use a bulk create operation instead of creating the objects one by one. Please follow this documentation link: https://docs.djangoproject.com/en/3.0/ref/models/querysets/#bulk-create.
Your problem is caused by traversing the data incorrectly in your loop. My advice is to start searching for the problem from the errors; the error clearly says the bug is in how the data structure is accessed.
Now look at request.data: it is not a list of dicts that you can loop over the way you are doing. Please see this Stack Overflow page for more details: Extracting items out of a QueryDict
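Putting both suggestions together, a minimal sketch of the view (assuming the React side posts the payload with Content-Type: application/json, so request.data arrives as a plain dict keyed by row number rather than a QueryDict):

from rest_framework.response import Response
from rest_framework.views import APIView

class CsvToDatabase(APIView):
    def post(self, request, format=None):
        # request.data looks like {"1": {...}, "2": {...}}; iterate the values.
        vendors = [
            Vendors(vendor_name=row['Vendor'], country=row['Country'])
            for row in request.data.values()
        ]
        Vendors.objects.bulk_create(vendors)  # one query instead of N
        return Response({'created': len(vendors)})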

Python - Extract information from dataframe (JSON)

I'm a beginner and it's been a long time since I've coded anything :-) I'm using the requests library to retrieve JSON data from the Incapsula (cloud web security service) API to get some stats about a website. What I want in the end is to write the type of traffic, timestamp, and number to a file to create reports.
The API response is something like this:
{
    "res": 0,
    "res_message": "OK",
    "visits_timeseries": [
        {
            "id": "api.stats.visits_timeseries.human",
            "name": "Human visits",
            "data": [
                [1344247200000, 50],
                [1344247500000, 40],
                ...
            ]
        },
        {
            "id": "api.stats.visits_timeseries.bot",
            "name": "Bot visits",
            "data": [
                [1344247200000, 10],
                [1344247500000, 20],
                ...
            ]
        }
    ]
}
I'm recovering the visits_timeseries data like this:

import pandas
import requests

r = requests.post('https://my.incapsula.com/api/stats/v1', params=payload)
reply = r.json()
reply = reply['visits_timeseries']
reply = pandas.DataFrame(reply)
I get data in this form (date in Unix time, number of visits):
print(reply[['name', 'data']].head())
name data
0 Human visits [[1500163200000, 39], [1499904000000, 73], [14...
1 Bot visits [[1500163200000, 1891], [1499904000000, 1926],...
I don't understand how to extract the fields I want from the dataframe so I can write only them to the Excel file. I would need to split the data field into two columns (date, value), with the names as the header row.
What would be great is:

Date    Human Visit    Bot Visit
date    value          value
date    value          value
Thanks for your help!
Well, if it is any help, this is a hardcoded version:
import pandas as pd

reply = {
    "res": 0,
    "res_message": "OK",
    "visits_timeseries": [
        {
            "id": "api.stats.visits_timeseries.human",
            "name": "Human visits",
            "data": [
                [1344247200000, 50],
                [1344247500000, 40]
            ]
        },
        {
            "id": "api.stats.visits_timeseries.bot",
            "name": "Bot visits",
            "data": [
                [1344247200000, 10],
                [1344247500000, 20]
            ]
        }
    ]
}

human_data = reply['visits_timeseries'][0]['data']
bot_data = reply['visits_timeseries'][1]['data']
df_h = pd.DataFrame(human_data, columns=['Date', 'Human Visit'])
df_b = pd.DataFrame(bot_data, columns=['Date', 'Bot Visit'])
# DataFrame.append was removed in pandas 2.0; pd.concat does the same job here.
df = pd.concat([df_h, df_b], ignore_index=True).fillna(0)
df = df.groupby('Date').sum()
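For reference, a less hardcoded variant (a sketch: it assumes every entry in visits_timeseries carries 'name' and 'data' keys as in the response above, and to_excel needs the openpyxl package installed):

frames = [
    pd.DataFrame(series['data'], columns=['Date', series['name']]).set_index('Date')
    for series in reply['visits_timeseries']
]
report = pd.concat(frames, axis=1).fillna(0)
report.index = pd.to_datetime(report.index, unit='ms')  # Unix millis -> timestamps
report.to_excel('incapsula_report.xlsx')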

Get all MongoDB documents for the whole day [duplicate]

I've been playing around with storing tweets inside MongoDB; each object looks like this:
{
    "_id" : ObjectId("4c02c58de500fe1be1000005"),
    "contributors" : null,
    "text" : "Hello world",
    "user" : {
        "following" : null,
        "followers_count" : 5,
        "utc_offset" : null,
        "location" : "",
        "profile_text_color" : "000000",
        "friends_count" : 11,
        "profile_link_color" : "0000ff",
        "verified" : false,
        "protected" : false,
        "url" : null,
        "contributors_enabled" : false,
        "created_at" : "Sun May 30 18:47:06 +0000 2010",
        "geo_enabled" : false,
        "profile_sidebar_border_color" : "87bc44",
        "statuses_count" : 13,
        "favourites_count" : 0,
        "description" : "",
        "notifications" : null,
        "profile_background_tile" : false,
        "lang" : "en",
        "id" : 149978111,
        "time_zone" : null,
        "profile_sidebar_fill_color" : "e0ff92"
    },
    "geo" : null,
    "coordinates" : null,
    "in_reply_to_user_id" : 149183152,
    "place" : null,
    "created_at" : "Sun May 30 20:07:35 +0000 2010",
    "source" : "web",
    "in_reply_to_status_id" : {
        "floatApprox" : 15061797850
    },
    "truncated" : false,
    "favorited" : false,
    "id" : {
        "floatApprox" : 15061838001
    }
}
How would I write a query which checks the created_at and finds all objects between 18:47 and 19:00? Do I need to update my documents so the dates are stored in a specific format?
Querying for a Date Range (Specific Month or Day) in the MongoDB Cookbook has a very good explanation on the matter, but below is something I tried out myself and it seems to work.
items.save({
    name: "example",
    created_at: ISODate("2010-04-30T00:00:00.000Z")
})
items.find({
    created_at: {
        $gte: ISODate("2010-04-29T00:00:00.000Z"),
        $lt: ISODate("2010-05-01T00:00:00.000Z")
    }
})
=> { "_id" : ObjectId("4c0791e2b9ec877893f3363b"), "name" : "example", "created_at" : "Sun May 30 2010 00:00:00 GMT+0300 (EEST)" }
Based on my experiments you will need to serialize your dates into a format that MongoDB supports, because the following gave undesired search results.
items.save({
    name: "example",
    created_at: "Sun May 30 18.49:00 +0000 2010"
})
items.find({
    created_at: {
        $gte: "Mon May 30 18:47:00 +0000 2015",
        $lt: "Sun May 30 20:40:36 +0000 2010"
    }
})
=> { "_id" : ObjectId("4c079123b9ec877893f33638"), "name" : "example", "created_at" : "Sun May 30 18.49:00 +0000 2010" }
In the second example no results were expected, but one was still returned. This is because a basic string comparison is done.
To clarify, what is important to know is that:
Yes, you have to pass a JavaScript Date object.
Yes, it has to be ISODate friendly.
Yes, from my experience getting this to work, you need to manipulate the date to ISO.
Yes, working with dates is generally always a tedious process, and Mongo is no exception.
Here is a working snippet of code, where we do a little bit of date manipulation to ensure Mongo can handle it correctly (here I am using the mongoose module and want results for rows whose date attribute is less than (before) the date given as the myDate param):
var inputDate = new Date(myDate.toISOString());
MyModel.find({
    'date': { $lte: inputDate }
})
Python and pymongo
Finding objects between two dates in Python with pymongo in collection posts (based on the tutorial):

import datetime

from_date = datetime.datetime(2010, 12, 31, 12, 30, 30, 125000)
to_date = datetime.datetime(2011, 12, 31, 12, 30, 30, 125000)
for post in posts.find({"date": {"$gte": from_date, "$lt": to_date}}):
    print(post)

Where {"$gte": from_date, "$lt": to_date} specifies the range in terms of datetime.datetime types.
db.collection.find({
    "createdDate": {
        $gte: new ISODate("2017-04-14T23:59:59Z"),
        $lte: new ISODate("2017-04-15T23:59:59Z")
    }
}).count();
Replace collection with the name of the collection you want to query.
MongoDB actually stores the millis of a date as an int(64), as prescribed by http://bsonspec.org/#/specification
However, it can get pretty confusing when you retrieve dates, as the client driver will instantiate a date object with its own local timezone. The JavaScript driver in the mongo console will certainly do this.
So, if you care about your timezones, then make sure you know what the date is supposed to be when you get it back. This shouldn't matter so much for the queries, as it will still equate to the same int(64), regardless of what timezone your date object is in (I hope). But I'd definitely make queries with actual date objects (not strings) and let the driver do its thing.
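As a quick pymongo illustration of that advice (a sketch: tweets is assumed to be a pymongo collection handle, and created_at is assumed to have been re-saved as a real date rather than the string shown in the question):

from datetime import datetime, timezone

# Real datetime objects are sent to the server as BSON dates (UTC millis),
# so the comparison is exact regardless of the client's local timezone.
start = datetime(2010, 5, 30, 18, 47, tzinfo=timezone.utc)
end = datetime(2010, 5, 30, 19, 0, tzinfo=timezone.utc)
for tweet in tweets.find({"created_at": {"$gte": start, "$lt": end}}):
    print(tweet["text"])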
Use this code to find the records between two dates using $gte and $lt:
db.CollectionName.find({
    "whenCreated": {
        '$gte': ISODate("2018-03-06T13:10:40.294Z"),
        '$lt': ISODate("2018-05-06T13:10:40.294Z")
    }
});
Using with Moment.js and Comparison Query Operators
var today = moment().startOf('day');
// "2018-12-05T00:00:00.000"
var tomorrow = moment(today).endOf('day');
// "2018-12-05T23:59:59.999"
Example.find({
    // find in today
    created: { '$gte': today, '$lte': tomorrow }
    // Or older than 5 days:
    // created: { $lt: moment().add(-5, 'days') },
}, function (err, docs) { ... });
db.collection.find({
    $and: [
        { date_time: { $gt: ISODate("2020-06-01T00:00:00.000Z") } },
        { date_time: { $lt: ISODate("2020-06-30T00:00:00.000Z") } }
    ]
})
In case you are making the query directly from your application:
db.collection.find({
    $and: [
        { date_time: { $gt: "2020-06-01T00:00:00.000Z" } },
        { date_time: { $lt: "2020-06-30T00:00:00.000Z" } }
    ]
})
You can also check this out. If you are using this method, then use the parse function to get values from the Mongo database:
db.getCollection('user').find({
    createdOn: {
        $gt: ISODate("2020-01-01T00:00:00.000Z"),
        $lt: ISODate("2020-03-01T00:00:00.000Z")
    }
})
Save the created_at date in ISO date format, then use $gte and $lte:
db.connection.find({
    created_at: {
        $gte: ISODate("2010-05-30T18:47:00.000Z"),
        $lte: ISODate("2010-05-30T19:00:00.000Z")
    }
})
Use $gte and $lte to find data between dates in MongoDB:
var tomorrowDate = moment(new Date()).add(1, 'days').format("YYYY-MM-DD");
db.collection.find({
    "plannedDeliveryDate": {
        $gte: new Date(tomorrowDate + "T00:00:00.000Z"),
        $lte: new Date(tomorrowDate + "T23:59:59.999Z")
    }
})
mongoose.model('ModelName').aggregate([
    {
        $match: {
            userId: mongoose.Types.ObjectId(userId)
        }
    },
    {
        $project: {
            dataList: {
                $filter: {
                    input: "$dataList",
                    as: "item",
                    cond: {
                        $and: [
                            { $gte: ["$$item.dateTime", new Date(`2017-01-01T00:00:00.000Z`)] },
                            { $lte: ["$$item.dateTime", new Date(`2019-12-01T00:00:00.000Z`)] }
                        ]
                    }
                }
            }
        }
    }
])
For those using Make (formerly Integromat) and MongoDB:
I was struggling to find the right way to query all records between two dates. In the end, all I had to do was to remove ISODate as suggested in some of the solutions here.
So the full code would be:
"created": {
"$gte": "2016-01-01T00:00:00.000Z",
"$lt": "2017-01-01T00:00:00.000Z"
}
This article helped me achieve my goal.
UPDATE
Another way to achieve the above code in Make (formerly Integromat) would be to use the parseDate function. So the code below will return the same result as the one above:
"created": {
"$gte": "{{parseDate("2016-01-01"; "YYYY-MM-DD")}}",
"$lt": "{{parseDate("2017-01-01"; "YYYY-MM-DD")}}"
}
⚠️ Be sure to wrap {{parseDate("2017-01-01"; "YYYY-MM-DD")}} between quotation marks.
Convert your dates to GMT timezone as you're stuffing them into Mongo. That way there's never a timezone issue. Then just do the math on the twitter/timezone field when you pull the data back out for presentation.
Why not convert the string to an integer of the form YYYYMMDDHHMMSS? Each increment of time would then create a larger integer, and you can filter on the integers instead of worrying about converting to ISO time.
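A quick sketch of that integer idea in Python (illustrative only: collection is assumed to be a pymongo collection handle, and created_at_int is a hypothetical field you would populate with the same conversion at write time):

from datetime import datetime

def as_sortable_int(dt):
    # 2010-05-30 18:47:00 -> 20100530184700; later instants give larger ints.
    return int(dt.strftime("%Y%m%d%H%M%S"))

start = as_sortable_int(datetime(2010, 5, 30, 18, 47))
end = as_sortable_int(datetime(2010, 5, 30, 19, 0))
results = collection.find({"created_at_int": {"$gte": start, "$lt": end}})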
Scala:
With joda DateTime and BSON syntax (reactivemongo):

val queryDateRangeForOneField = (start: DateTime, end: DateTime) =>
    BSONDocument(
        "created_at" -> BSONDocument(
            "$gte" -> BSONDateTime(start.millisOfDay().withMinimumValue().getMillis),
            "$lte" -> BSONDateTime(end.millisOfDay().withMaximumValue().getMillis)))

where millisOfDay().withMinimumValue() for "2021-09-08T06:42:51.697Z" will be "2021-09-08T00:00:00.000Z"
and
where millisOfDay().withMaximumValue() for "2021-09-08T06:42:51.697Z" will be "2021-09-08T23:59:59.999Z"
I tried this approach as per my requirements: I need to store a date whenever an object is created, and later I want to retrieve all the records (documents) between two dates.
In my HTML file I was using the format mm/dd/yyyy:
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
    <script>
        // jquery
        $(document).ready(function() {
            $("#select_date").click(function() {  // event triggered
                $.ajax({
                    type: "post",
                    url: "xxx",
                    datatype: "html",
                    data: $("#period").serialize(),
                    success: function(data) {
                        alert(data);
                    } // success
                }); // ajax
            });
        });
    </script>
    <title></title>
</head>
<body>
    <form id="period" name="period">
        from <input id="selecteddate" name="selecteddate1" type="text"> to
        <input id="select_date" type="button" value="selected">
    </form>
</body>
</html>
In my py (Python) file I converted it into ISO format in the following way:

date_str1 = request.POST["SelectedDate1"]
SelectedDate1 = datetime.datetime.strptime(date_str1, '%m/%d/%Y').isoformat()

and saved it in my MongoDB collection with "SelectedDate" as a field in my collection.
To retrieve the documents between two dates I used the following query; since the dates are stored as ISO-format strings, plain string comparison with $gte/$lt matches chronological order:
db.collection.find({"SelectedDate": {'$gte': SelectedDate1, '$lt': SelectedDate2}})
