
Posts

Showing posts from June, 2023

Automate! Automate!

 Making this customizable 👉 So how about making our search user-customizable? In the code below, I have: asked the user to input an artist (line 14); tidied up their input (line 15); formatted the search URL as an f-string that includes the artist (line 19). Here's the code: …

Automate! Automate! We are so close. I can taste it, folks! Massive kudos on getting this far! Today's lesson, however, will work best if you have one of Replit's paid-for features (Hacker plan or Cycles). Free-plan Repls 'fall asleep' after a while, and automation relies on the Repl being always on. If you have the Hacker plan or you've bought some Cycles, you can enable 'always on' in the drop-down menu that appears when you click your Repl name (top left). This is important because when our Repl is always running, it can keep track of time and schedule events. 👉 I've set up a simple schedule that prints out a clock emoji every couple of seconds. It works like this: Import schedule librar…
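The excerpt above cuts off before the `schedule` code, so here's a minimal stdlib-only sketch of the same idea: run an action whenever a chosen interval has elapsed. The names (`make_ticker`, the clock emoji) are illustrative, not the course's own code.

```python
import time

def make_ticker(interval, action, clock=time.monotonic):
    """Run action() whenever `interval` seconds have passed since the last run."""
    state = {"last": clock()}
    def tick():
        now = clock()
        if now - state["last"] >= interval:
            state["last"] = now
            action()
            return True  # the action fired on this check
        return False     # not time yet
    return tick

# usage in an always-on Repl: check the schedule in a loop
# ticker = make_ticker(2, lambda: print("🕒"))
# while True:
#     ticker()
#     time.sleep(0.1)
```

The `schedule` library the lesson uses wraps exactly this pattern: you register jobs, then call `schedule.run_pending()` inside a loop.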

Web Scraping

 Web Scraping Some websites don't have lovely APIs for us to interface with. If we want data from these pages, we have to use a technique called scraping. This means downloading the whole webpage and poking at it until we can find the information we want. You're going to use scraping to get the top ten restaurants near you. Get started 👉 Go to a website like Yelp and search for the top 10 restaurants in your location. Copy the URL.   url = "https://www.yelp.co.uk/search?find_desc=Restaurants&find_loc=San+Francisco%2C+CA%2C+United+States"   Import libraries 👉 Import your libraries. Beautiful Soup is a specialist library for extracting the contents of HTML and helping us parse them. Run the Repl once your imports are sorted because we want the Beautiful Soup library to be installed (it'll run quicker this way). import requests from bs4 import BeautifulSoup url = "https://www.yelp.co.uk/search?find_desc=Restaurants&find_loc=San+Francisco%2C+CA%2C+Unite…
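To show the Beautiful Soup pattern without hammering Yelp (whose real markup differs and changes often), here's a sketch that parses an inline HTML snippet; the `restaurant-name` class is made up for the example.

```python
from bs4 import BeautifulSoup

# stand-in for the page requests.get(url).text would return;
# real Yelp markup uses different (and changing) class names
html = """
<div>
  <span class="restaurant-name">Alpha Diner</span>
  <span class="restaurant-name">Beta Bistro</span>
</div>
"""

soup = BeautifulSoup(html, "html.parser")          # parse the downloaded page
tags = soup.find_all("span", class_="restaurant-name")  # poke at it for what we want
names = [tag.get_text() for tag in tags]
print(names)
```

With a live page you'd swap the `html` string for `requests.get(url).text` and inspect the page source to find the right tag and class to hand to `find_all`.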

API Smash Battle

 API Smash Battle That's right, folks. Everyone loves a good mashup. So, today we're going to learn how to combine data from more than one API. NewsAPI 👉 Let's get signed up for our APIs. First up is News API. Hit 'Get API Key' Create an account Copy the API key Add it as a secret to your Repl called newsapi Go to the get started guide Choose the Get current top articles option and grab the full URL for the GET request     Put it in your code as an f-string. Replace the last bit with a {newsKey} variable (that will store the API key). We'll break it down further in a bit. url = f"https://newsapi.org/v2/top-headlines?country=us&apiKey={newsKey}" Import os Now import the os library and add the API key in using the code in the secrets menu.   import os newsKey = os.environ['newsapi'] url = f"https://newsapi.org/v2/top-headlines?country=us&apiKey={newsKey}"  Request time 👉 Now it's request time. I'm going to replace the ha…
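Putting the steps above together: build the URL as an f-string around the secret, then pull the headlines out of the response. The `sample` dict below is a made-up, trimmed stand-in for a NewsAPI response so the sketch runs without a key or network access.

```python
import os

# the real key lives in a Repl secret called 'newsapi';
# the fallback value here is only so the sketch runs anywhere
newsKey = os.environ.get("newsapi", "demo-key")
url = f"https://newsapi.org/v2/top-headlines?country=us&apiKey={newsKey}"

# shape of a (trimmed, invented) NewsAPI response —
# with a key you'd get this from requests.get(url).json()
sample = {
    "status": "ok",
    "articles": [
        {"title": "Example headline", "url": "https://example.com/story"},
    ],
}

headlines = [article["title"] for article in sample["articles"]]
print(headlines)
```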

API? Spotify? Verify!

API? Spotify? Verify! The APIs we've been using so far are pretty unusual in that they provide their service for free. Normally, you have to pay to use an API's data services (at least if you're doing so commercially). This means that you will need to verify your status as an approved user before you can get your grubby hands on all of that sweet, sweet data! Today, we're learning how to write a program that tells an API that we've got an account before accessing its info. Don't worry, you won't have to bust out the credit card. We're using a Spotify API that won't charge provided we keep our usage under a certain level. Get started 👉 Click here to go to the Spotify developer page and log in/create an account. 👉 Next, hit create app and give it a name and description. 👉 Copy the client ID and insert it as a secret in your Repl. Make sure to call it CLIENT_ID. Client Secret 👉 Back to Spotify and click show client secret (use your own, not the one in th…
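A sketch of where those two secrets end up: Spotify's client-credentials flow swaps your `CLIENT_ID` and `CLIENT_SECRET` for a short-lived access token, which you then attach to data requests. The placeholder fallbacks are for illustration only; the request itself is only run when you call the function with real credentials.

```python
import os
import requests

# secrets set in the Repl's secrets panel; placeholders let the sketch load anywhere
CLIENT_ID = os.environ.get("CLIENT_ID", "your-client-id")
CLIENT_SECRET = os.environ.get("CLIENT_SECRET", "your-client-secret")

AUTH_URL = "https://accounts.spotify.com/api/token"
payload = {"grant_type": "client_credentials"}

def get_access_token():
    # POST the ID/secret (as HTTP basic auth) and get back a temporary token
    response = requests.post(AUTH_URL, data=payload,
                             auth=(CLIENT_ID, CLIENT_SECRET))
    return response.json().get("access_token")

# token = get_access_token()  # needs real credentials to succeed
```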

Funny, eh? Funny, how?

 Funny, eh? Funny, how? Dad Jokes The API we're using today is the awesome icanhazdadjoke. Go and check out their API documentation before continuing. Look at the endpoint to see the URL to access and the format of the data we'll get back. 👉 Here's the code to get a random dad joke and output it. NOTE - The second argument (headers=) in requests.get() is really important. It tells the code that we don't want the website back, we want JSON data in a specific format. Sometimes you need to do that. import requests, json result = requests.get("https://icanhazdadjoke.com/", headers={"Accept":"application/json"}) # get a random dad joke from the site endpoint and assign to a variable. The second argument (the header request) tells the script to return the json data as a string. joke = result.json() print(json.dumps(joke, indent=2)) 👉 I can change the print statement to just output the joke instead of the whole dictionary. print(joke["joke"])
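The parsing step works the same whether the JSON comes over the network or from a canned string, so here's the pattern with an invented sample response alongside the live call (the joke and `id` below are made up, not from the API):

```python
import requests, json

def get_joke():
    # the Accept header tells icanhazdadjoke we want JSON back, not the HTML page
    result = requests.get("https://icanhazdadjoke.com/",
                          headers={"Accept": "application/json"})
    return result.json()["joke"]

# the same parsing on a canned response — no network needed:
sample = '{"id": "abc123", "joke": "Why do bees have sticky hair? Because they use honeycombs.", "status": 200}'
data = json.loads(sample)
print(data["joke"])
```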

JSON

 JSON It's day 90, and today we're going to start learning how to use JSON (JavaScript Object Notation, pronounced 'Jason') to get data from other websites. It's the first step on our journey to web scraping. JSON is a text-based way of describing how a 2D dictionary might look. This is important when sending messages to other websites, getting a message back, and decoding it. Most of the time, the message we get back will be in JSON format, and we need to interpret it in Python as a 2D dictionary to make sense of it. Go Get The Data 👉 Let's do a simple data grab from a free-to-use website - randomuser.me - that generates some data about a fictional user. import requests # import the required library result = requests.get("https://randomuser.me/api/") # ask the site for data and store it in a variable print(result.json()) # interpret the data in the variable as json and print it. Run it. You'll get lots of data. Tidy it up 👉 Next, let's try to tidy t…
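The tidying-up step is drilling into that 2D (nested) dictionary and pretty-printing it. The `sample` below is a trimmed, invented stand-in for a randomuser.me response so the sketch runs offline; the real response has the same `results → name → first/last` nesting plus many more fields.

```python
import json

# trimmed, made-up stand-in for requests.get("https://randomuser.me/api/").json()
sample = {
    "results": [
        {
            "name": {"title": "Ms", "first": "Ada", "last": "Lovelace"},
            "email": "ada@example.com",
        }
    ]
}

user = sample["results"][0]                              # first (only) user
full_name = f'{user["name"]["first"]} {user["name"]["last"]}'  # drill into the nesting
pretty = json.dumps(sample, indent=2)                    # human-readable dump

print(full_name)
print(pretty)
```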

Authentication Finesse

 Authentication Finesse So far, we've used Replit authentication as a bit of a bully. It's forced users to authenticate on every page. For a blog engine, this will probably put users off. We want them to be able to read your online literary genius without being turned off by having to create an account and log in. Today is all about finessing the Replit authenticator so that it works in a more subtle way. Custom Buttons To start, I've turned on the authenticator from the files panel, where you can select a prebuilt login page. Make sure you do this before you write any code! 👉 This time though, I've clicked the use your own custom button link. Now I've got some lovely code snippets to steal... er, work with. Add an HTML template 👉 Next I add an HTML template page where the button will appear. The page is called page.html and can be found in your file tree. Here's the code: <html>   <head>     <title>My Website</title>   </head> …
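The excerpt cuts the template off, so here's a self-contained Flask sketch serving a page like `page.html`; the template is inlined as a string for the sake of a runnable example, and the `<script>` line stands in for whatever snippet the authenticator panel gives you (treat its exact URL/attributes as placeholders).

```python
from flask import Flask

app = Flask(__name__)

# inlined stand-in for page.html; the <script> tag is a placeholder for the
# custom-button snippet copied from the authenticator panel
PAGE = """<html>
  <head>
    <title>My Website</title>
  </head>
  <body>
    <div>
      <script authed="location.reload()"
              src="https://auth.util.repl.co/script.js"></script>
    </div>
  </body>
</html>"""

@app.route("/")
def index():
    return PAGE  # served without forcing a login first

# app.run(host="0.0.0.0", port=81)  # uncomment to serve from a Repl
```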

Authentication

 Authentication Efficient code often has the drawback of being very hard to understand at first. It's often very dense, with lots of things happening on a single line of code. That's why teachers often teach what could be described as 'the long way round' when designing lessons on new topics. With all that in mind, don't be mad at us when we say that there is an easier way to create a login system than by using sessions. You've spent the past few days getting a really thorough grounding in what's going on behind the scenes, which was the whole point. No, really. We promise. Replit Authentication Here at Replit, we know that you will probably be using authentication a lot. So we've baked in the feature for you. 👉 Run your code, then head over to your left-hand files pane and scroll until you see authentication. Then, erm, turn it on. That's it. Now you will see that your Repl uses the default Replit login page. I can also access a bunch of informatio…

HTTP & Sessions

 HTTP & Sessions One of the main protocols (rules that govern how computers communicate) on the web is called HTTP. HTTP is what is known as a stateless protocol. This means that it doesn't 'remember' things. It's a bit like having a conversation with a goldfish. You can ask a question and get a reply, but when you ask a follow-up question, the original has already been forgotten, as has who you are and what you were talking about. So if HTTP is stateless, how come my news site remembers to give me the weather for my home town, my preferred South American river-based online store tells me when it's time to order more multivitamins, and I'm justifiably proud of my #100days success streak? The answer is… Sessions Sessions are a way of storing files on your computer that allow a website to keep a record of previous 'conversations' and 'questions' you've asked. By using sessions, we can store this info about the user to access later.
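A minimal Flask sketch of the idea: the `session` object lets each visitor's data survive between otherwise stateless HTTP requests (Flask stores it in a signed cookie, so a `secret_key` is required). The route and counter here are illustrative, not the course's code.

```python
from flask import Flask, session

app = Flask(__name__)
app.secret_key = "replace-with-a-real-secret"  # signs the session cookie

@app.route("/")
def index():
    # remember how many times THIS visitor has been here,
    # even though each HTTP request is independent
    session["visits"] = session.get("visits", 0) + 1
    return f"You have visited {session['visits']} time(s)."
```

Each browser gets its own session, so two different visitors see independent counts.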

Client/Server Logins

 Client/Server Logins Waaay back when we learned about repl.db, we mentioned the idea of a client/server model for storing data in one place and dishing it out to multiple users. This model is the way we overcome the issue with repl.db of each user getting their own copy of the database. Well, now we can use Flask as a web server. We can build this client/server model to persistently store data in the Repl (the server) and have it be accessed by multiple users who access the website via the URL (the clients). Get Started Previously, we have built login systems using Flask & HTML. We're going to start with one of those systems and adapt it to use a dictionary instead. 👉 First, let's remind ourselves of the way the system works. Here's the Flask code. Read the comments for explanations of what it does: from flask import Flask, request, redirect # imports request and redirect as well as flask app = Flask(__name__, static_url_path='/static') # path to the static fil…
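Since the excerpt cuts the Flask code off, here's a compact sketch of the dictionary-backed version described above. The username/password pair is a made-up demo credential (a real app would hash passwords, not store them in plain text).

```python
from flask import Flask, request, redirect

app = Flask(__name__)

# server-side 'database': ONE copy, shared by every client that visits the URL
users = {
    "david": {"password": "baldy1"},  # hypothetical demo credentials only
}

@app.route("/login", methods=["POST"])
def login():
    form = request.form                      # data posted by the HTML login form
    user = users.get(form.get("username"))   # look the user up in the dictionary
    if user and user["password"] == form.get("password"):
        return "Welcome back!"
    return redirect("/")                     # wrong details: back to the login page
```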

Don't Stop 'Til You Get

 Don't Stop 'Til You Get Today, we're going to learn about an alternative way of getting data from forms to the webserver. So far, we've used post, which (kinda) packages up all the data from the form and sends it to the server. We can think of this as the form controlling when the data is sent. With the get method, the request for the data comes from the webserver. It effectively says 'gimme that data' to the form. You've probably seen get in use before. If you've ever seen a URL with a ? after the website name, then a bunch of = and maybe & symbols, then that website is using get. So What's The Difference? I'm glad you asked! With post, the data in the form can't be seen by your web browser. Once it's sent, it's gone. This means that you can't bookmark or share a URL based on post data because it will be different for each user. Ever tried to drop those SO subtle present hints by sharing a shopping cart link? Only to get a link that d…
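A small Flask sketch of the get side of this: the form fields travel in the URL itself (the part after `?`, joined by `&`), and the server reads them from `request.args`. The route and field names here are illustrative.

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/search", methods=["GET"])
def search():
    # with get, form data arrives in the URL: /search?query=pizza&page=2
    query = request.args.get("query", "")
    page = request.args.get("page", "1")
    return f"Searching for {query} (page {page})"
```

Because everything is in the URL, a get request can be bookmarked or shared, which is exactly what post can't do.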