
API Smash Battle

That's right, folks. Everyone loves a good mashup, so today we're going to learn how to combine data from more than one API.


NewsAPI

👉 Let's get signed up for our APIs. First up is News API.


Hit 'Get API Key'

Create an account

Copy the API key

Add it as a secret to your repl called newsapi

Go to the get started guide

Choose the 'Get current top articles' option and grab the full URL for the GET request.

Put it in your code as an f-string, replacing the key at the end with a {newsKey} variable (which will store the API key). We'll break it down further in a bit.
url = f"https://newsapi.org/v2/top-headlines?country=us&apiKey={newsKey}"

Import os
Now import the os library and read the API key in using the snippet from the Secrets menu.

import os

newsKey = os.environ['newsapi']

url = f"https://newsapi.org/v2/top-headlines?country=us&apiKey={newsKey}" 

Request time
👉 Now it's request time. I'm going to replace the hard-coded country code in the URL with a variable to make it customizable later on.

import os

newsKey = os.environ['newsapi']
country = "us"

url = f"https://newsapi.org/v2/top-headlines?country={country}&apiKey={newsKey}"  

👉 Next, create the request, send it to the URL, parse the returned data as JSON, and print it out so we can see what we get back.

import requests, json, os

newsKey = os.environ['newsapi']
country = "us"

url = f"https://newsapi.org/v2/top-headlines?country={country}&apiKey={newsKey}"

##### The new bit ###################
result = requests.get(url)
data = result.json()
print(json.dumps(data, indent=2))  
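Before moving on, it helps to know the rough shape of what comes back. Here's a hand-written sketch of a NewsAPI top-headlines response; the values are made up for illustration, only the structure matters:

```python
# Illustrative shape of a NewsAPI top-headlines response.
# The field values here are invented; only the structure matters.
sample = {
    "status": "ok",
    "totalResults": 1,
    "articles": [
        {
            "title": "Example headline",
            "url": "https://example.com/story",
            "content": "The first few hundred characters of the story...",
        },
    ],
}

# The articles live in a list under the 'articles' key.
first = sample["articles"][0]
print(first["title"])  # Example headline
```

Knowing this structure is what lets us loop over data['articles'] in the next step.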

👉 Inspecting the returned data tells us what information we can extract. I'm going to use a loop to print out just the titles, URLs, and content.

For this, I've commented out the print(json.dumps(data, indent=2)) line so that it doesn't output everything. The line is still useful for testing, so I've made it a comment instead of deleting it completely.

import requests, json, os

newsKey = os.environ['newsapi']
country = "us"

url = f"https://newsapi.org/v2/top-headlines?country={country}&apiKey={newsKey}"

result = requests.get(url)
data = result.json()
# print(json.dumps(data, indent=2))


##### The new bit #####################
for article in data['articles']:
  print(article['title'])
  print(article['url'])
  print(article['content'])  
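One thing to watch for: NewsAPI can return null for an article's content, and the loop above would then print None. Here's a slightly more defensive version of the loop, sketched with hard-coded sample data so it runs without an API key:

```python
# Sample data mimicking the shape of NewsAPI's response.
# 'content' is sometimes null (None in Python) in real responses.
data = {
    "articles": [
        {"title": "Story one", "url": "https://example.com/1", "content": "Some text"},
        {"title": "Story two", "url": "https://example.com/2", "content": None},
    ]
}

for article in data["articles"]:
    print(article["title"])
    print(article["url"])
    # Fall back to a placeholder when the content is missing.
    print(article.get("content") or "(no content provided)")
```

Using .get() with a fallback means one incomplete article won't crash or clutter the output.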

Sometimes, however, the content isn't great: articles come back truncated or with missing text. That's where the second API comes in.

OpenAI
The second half of our mashup uses OpenAI.

Again, you'll need to create an account, verify your email, and validate with your mobile number.

You only get a certain amount of use out of the free trial, so if you use this a lot, it will run out.

OpenAI lets us tell a computer what to do in plain text.

👉 There are quite a few steps involved in setup, so read these next instructions carefully.

Go to your profile and click view API Keys
Create a new secret key
Copy the API key
Add it as a secret to your repl called openai
Now get your organization ID (from the 'settings' menu on the left of the 'View API Keys' screen).
Make a new secret called organizationID for this.
Next, move all of your previous code into a new file called 'news.py'
Bring your 'openai' secrets into main.py  

import requests, json, os

openai = os.environ['openai']
orgid = os.environ['organizationID']  


Go to the documentation menu on the website and scroll down until you find authentication
Copy the 'Example with the openai Python package' code.

Paste the code from OpenAI into main.py and combine it with your secrets. This removes the extra os import and takes advantage of Replit installing libraries for you.

import requests, json, os
import openai
openai.organization = os.environ['organizationID']
openai.api_key = os.environ['openai']
openai.Model.list() 


Completion
👉 The OpenAI API reference page shows lots of things that we can do. I'm going to start with a completion. This example talks to OpenAI and asks it to say 'this is a test'. I've printed the response to show that happening.

response = openai.Completion.create(model="text-davinci-002", prompt="Say this is a test", temperature=0, max_tokens=6)

print(response) 


When you run this, you'll see that the reply comes back under a top-level 'choices' key, with the text itself nested under a second-level 'text' key.
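To make that concrete, here's a trimmed, hand-written sketch of the response shape. The field values are illustrative, not a real API reply, so this snippet runs without a key:

```python
# Hand-written sketch of a Completion-style response.
# Real replies include more fields; only 'choices' and 'text' matter here.
sample_response = {
    "choices": [
        {"text": "\n\nThis is a test", "index": 0, "finish_reason": "length"}
    ],
    "model": "text-davinci-002",
}

# Drill into the top-level 'choices' list, then the second-level 'text' key.
text = sample_response["choices"][0]["text"]
print(text.strip())  # This is a test
```

The leading newlines in 'text' are why we'll want .strip() later on.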

👉 Let's manipulate it to see if we can get it to do something else.

I've replaced the hard-coded prompt with a variable.

prompt = "Who is the most handsome bald man?"

response = openai.Completion.create(model="text-davinci-002", prompt=prompt, temperature=0, max_tokens=6)

print(response) 

Unfortunately, OpenAI has no definitive answer to this question!



👉 To output just the text (and strip out the stray whitespace), I've changed the print to this:

print(response["choices"][0]["text"].strip())  


Whole code  

import requests, json, os
import openai

openai.organization = os.environ['organizationID']
openai.api_key = os.environ['openai']
openai.Model.list()

prompt = "Who is the most handsome bald man?"

response = openai.Completion.create(model="text-davinci-002", prompt=prompt, temperature=0, max_tokens=6)

print(response["choices"][0]["text"].strip()) 
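To finish the mashup, you'd feed a headline from news.py into the prompt. Here's a sketch of the glue code: the build_prompt helper is my own invention (not part of either API), and the actual openai.Completion.create call is commented out so the snippet runs without keys or quota.

```python
def build_prompt(article):
    # Hypothetical helper: turn one NewsAPI article into a completion prompt.
    return f"Summarize this headline in one sentence: {article['title']}"

# In the real mashup this article would come from the NewsAPI loop in news.py.
article = {"title": "Example headline about something newsworthy"}
prompt = build_prompt(article)
print(prompt)

# With your secrets set up, you'd send it just like the completion above:
# response = openai.Completion.create(model="text-davinci-002",
#                                     prompt=prompt, temperature=0, max_tokens=60)
# print(response["choices"][0]["text"].strip())
```

Swapping the hard-coded article for each item in data['articles'] would give you an AI-written summary of every top headline: that's the smash battle complete.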





