
API Smash Battle


That's right folks. Everyone loves a good mashup. So, today we're going to learn how to combine data from more than one API.


NewsAPI

👉 Let's get signed up for our APIs. First up is News API.


Hit 'Get API Key'

Create an account

Copy the API key

Add it as a secret to your repl called newsapi

Go to the get started guide

Choose the Get current top articles option and grab the full URL for the GET request  



Put it in your code as an f-string. Replace the key at the end of the URL with a {newsKey} variable (which will store the API key). We'll break it down further in a bit.
url = f"https://newsapi.org/v2/top-headlines?country=us&apiKey={newsKey}"
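As a quick sanity check of how the f-string substitution works, here's a tiny sketch using a placeholder value (not a real key):

```python
# Placeholder for illustration only - the real key comes from the secrets menu
newsKey = "PLACEHOLDER_KEY"

# Python replaces {newsKey} with the variable's value when building the string
url = f"https://newsapi.org/v2/top-headlines?country=us&apiKey={newsKey}"

print(url)  # → https://newsapi.org/v2/top-headlines?country=us&apiKey=PLACEHOLDER_KEY
```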

Import os
Now import the os library and bring the API key in using the code snippet from the Secrets menu.

import os

newsKey = os.environ['newsapi']

url = f"https://newsapi.org/v2/top-headlines?country=us&apiKey={newsKey}" 

Request time
👉 Now it's request time. I'm going to replace the hard-coded country code in the URL with a variable to make it customizable later on.

import os

newsKey = os.environ['newsapi']
country = "us"

url = f"https://newsapi.org/v2/top-headlines?country={country}&apiKey={newsKey}"  

👉 Next, create the request, send it to the URL, parse the returned data as JSON, and print it out so we can see what we get back.

import requests, json, os

newsKey = os.environ['newsapi']
country = "us"

url = f"https://newsapi.org/v2/top-headlines?country={country}&apiKey={newsKey}"

##### The new bit ###################
result = requests.get(url)
data = result.json()
print(json.dumps(data, indent=2))  

👉 Inspecting the returned data tells us what information we can extract. I'm going to print out just the titles, URL links, and content using a loop.

For this, I've commented out the print(json.dumps(data, indent=2)) line so that it doesn't output everything. It's still useful for testing, so I've made it a comment instead of deleting it completely.

import requests, json, os

newsKey = os.environ['newsapi']
country = "us"

url = f"https://newsapi.org/v2/top-headlines?country={country}&apiKey={newsKey}"

result = requests.get(url)
data = result.json()
# print(json.dumps(data, indent=2))


##### The new bit #####################
for article in data['articles']:
  print(article['title'])
  print(article['url'])
  print(article['content'])  

Sometimes, however, the content isn't great: it can come back truncated or even empty.
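One way to cope is to clean the content field before using it. Here's a sketch using made-up sample articles rather than a live response; it assumes NewsAPI's behaviour of returning None for some articles and ending truncated content with a marker like "[+1234 chars]":

```python
def clean_content(article):
  """Return usable text for an article, falling back when 'content' is poor."""
  content = article.get('content')
  if not content:  # None or empty string
    return article.get('description') or article.get('title') or ""
  # Strip the trailing truncation marker, e.g. "... [+1234 chars]"
  if content.endswith('chars]'):
    content = content[:content.rfind('[')].rstrip()
  return content

# Sample articles shaped like NewsAPI output (invented for illustration)
samples = [
  {'title': 'A', 'content': None, 'description': 'Fallback text'},
  {'title': 'B', 'content': 'Real story text... [+1234 chars]'},
]

for article in samples:
  print(clean_content(article))
```

You could call clean_content(article) inside the loop above instead of printing article['content'] directly.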

OpenAI
The second half of our mashup uses OpenAI.

Again, you'll need to create an account, verify your email, and validate with your mobile number.

The free trial only gives you a limited amount of usage, so if you use it a lot, it will run out.

OpenAI allows us to tell a computer what to do in plain English.

👉 There are quite a few steps involved in setup, so read these next instructions carefully.

Go to your profile and click 'View API Keys'
Create a new secret key
Copy the API key
Add it as a secret to your repl called openai
Now get your organization ID (from the 'settings' menu on the left of the 'View API Keys' screen).
Make a new secret called organizationID for this.
Next, move all of your previous code into a new file called 'news.py'
Bring your 'openai' secrets into main.py  

import requests, json, os

openai = os.environ['openai']
orgid = os.environ['organizationID']  


Go to the documentation menu on the website and scroll down until you find 'Authentication'
Copy the 'Example with the openai Python package' code:


Paste the code from OpenAI into main.py and combine it with your secrets code. This lets you remove the duplicate os import, and Replit will install the openai library for you when you run.

import requests, json, os
import openai
openai.organization = os.environ['organizationID']
openai.api_key = os.environ['openai']
openai.Model.list() 


Completion
👉 The OpenAI API reference page shows lots of things that we can do. I'm going to start with a completion. This example should ask OpenAI to say 'this is a test'. I've printed the response to show that happening.

response = openai.Completion.create(model="text-davinci-002", prompt="Say this is a test", temperature=0, max_tokens=6)

print(response) 


When you run this, you'll see that the response comes back as JSON with a top-level 'choices' list and a 'text' field inside each choice.
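The structure looks roughly like this. Note this is a mock dictionary with invented values to show the shape, not a real API response:

```python
# A mock of the response shape (values invented for illustration)
response = {
  "choices": [
    {"text": "\n\nThis is indeed a test", "index": 0, "finish_reason": "length"}
  ],
  "model": "text-davinci-002",
}

# 'choices' is a list, so grab the first item, then its 'text' field
text = response["choices"][0]["text"]
print(text.strip())
```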

👉 Let's manipulate it to see if we can get it to do something else.

I've replaced the hard-coded prompt with a variable.

prompt = "Who is the most handsome bald man?"

response = openai.Completion.create(model="text-davinci-002", prompt=prompt, temperature=0, max_tokens=6)

print(response) 

Unfortunately, OpenAI has no definitive answer to this question!



👉 To output just the text (and strip out the odd bit of white space), I've changed the print to this:

print(response["choices"][0]["text"].strip())  


Whole code  

import requests, json, os
import openai

openai.organization = os.environ['organizationID']
openai.api_key = os.environ['openai']
openai.Model.list()

prompt = "Who is the most handsome bald man?"

response = openai.Completion.create(model="text-davinci-002", prompt=prompt, temperature=0, max_tokens=6)

print(response["choices"][0]["text"].strip()) 
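To actually mash the two APIs together, you could feed the news headlines into an OpenAI prompt. Here's a sketch: the prompt-building function is pure Python and runs on sample data as written, while the live API calls are left as comments because they need the secrets and a network connection (the model and parameters there are just the ones used above, not the only options).

```python
def build_prompt(articles, limit=3):
  """Turn a list of NewsAPI-style articles into a summarisation prompt."""
  headlines = [article['title'] for article in articles[:limit]]
  return "Summarise these headlines in one sentence:\n" + "\n".join(headlines)

# Sample data shaped like NewsAPI output (invented for illustration)
articles = [
  {'title': 'Local dog learns Python'},
  {'title': 'Markets rise on good news'},
  {'title': 'Weather: sunny spells ahead'},
]

prompt = build_prompt(articles)
print(prompt)

# With the secrets set up, the live version would look something like:
# import requests, os, openai
# newsKey = os.environ['newsapi']
# data = requests.get(f"https://newsapi.org/v2/top-headlines?country=us&apiKey={newsKey}").json()
# openai.api_key = os.environ['openai']
# response = openai.Completion.create(model="text-davinci-002",
#                                     prompt=build_prompt(data['articles']),
#                                     temperature=0, max_tokens=60)
# print(response["choices"][0]["text"].strip())
```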





