CSCI 4360/6360: Data Science II
Python as a language was implemented from the start by Guido van Rossum. What was originally something of a snarkily-named hobby project to pass the holidays turned into a huge open source phenomenon used by millions.
The original project began in 1989.
Release of Python 2.0 in 2000
Release of Python 3.0 in 2008
Python 2 reached end-of-life (EOL) in 2020, so you should be on Python 3 now. The latest version of Python is 3.11, with 3.12 due out later this year.
You're welcome to use whatever version you want, just be aware: the AutoLab autograders will be using 3.10.x (in general, anything 3.8 and above should be fine).
Python is an interpreted language.
Python is a very general language.
Instead, as Jake VanderPlas put it:
"Python syntax is the glue that holds your data science code together. As many scientists and statisticians have found, Python excels in that role because it is powerful, intuitive, quick to write, fun to use, and above all extremely useful in day-to-day data science tasks."
The most basic thing possible: Hello, World!
print("Hello, world!")
Hello, world!
Yep, that's all that's needed!
(Take note: the biggest difference between Python 2 and 3 is the print function: it technically wasn't a function in Python 2 so much as a language construct, and so you didn't need parentheses around the string you wanted printed; in Python 3, it's a full-fledged function, and therefore requires parentheses.)
Python is dynamically-typed, meaning you don't have to declare types when you assign variables; types are determined at runtime. Python is also duck-typed, a colloquialism meaning that an object's suitability is decided by the behavior it supports rather than by its declared type ("if it walks like a duck and quacks like a duck...").
x = 5
type(x)
int
y = 5.5
type(y)
float
It's important to note: even though you don't have to specify a type, Python still assigns a type to variables. It would behoove you to know the types so you don't run into tricky type-related bugs!
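For example, here's a small, contrived illustration of the kind of bug that bites you when you forget what type you're holding:
x = "5"       # a string that merely looks like a number
y = 5         # an actual int
# x + y       # uncommenting this raises a TypeError: you can't add a str and an int
print(x * y)  # string repetition, not multiplication!
55555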
x = 5 * 5
What's the type for x?
type(x)
int
y = 5 / 5
What's the type for y?
type(y)
float
There are functions you can use to explicitly cast a variable from one type to another:
x = 5 / 5
type(x)
float
y = int(x)
type(y)
int
z = str(y)
type(z)
str
Introduced in Python 3.8 (still relatively recent!), the walrus operator, or "Assignment Expression", is a way of assigning a value to a variable in mid-expression.
Other languages have had this ability for a while, but there was heated debate about how best to implement this in Python.
Normally, you'd first assign a value to a variable, and then perform a check.
v = 10
if (v > 5):
print("We've reached this location")
We've reached this location
Alternatively, you can perform the assignment right in the middle of the expression.
if ((v := 10) > 5):
print("We've now reached this location")
We've now reached this location
Note two important changes: the assignment now happens inside the conditional itself (via the := operator), and the assignment expression needs its own set of parentheses so it groups the way you intend.
Just as important to note: anywhere that you could use the walrus operator, you can also break it apart into two separate expressions! This is purely for brevity, but never sacrifice clarity for it.
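For instance, here's one more small sketch of the trade-off (the names here are made up for illustration):
data = [4, 8, 15, 16, 23, 42]
# Without the walrus operator, this takes two statements:
#     total = sum(data)
#     if total > 100: ...
# With it, the assignment happens inside the condition itself:
if (total := sum(data)) > 100:
    print(total)
108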
There are four main types of built-in Python data structures--lists, tuples, sets, and dictionaries--each similar but ever-so-slightly different:
(Note: generators and comprehensions are worthy of mention; definitely look into these as well)
Lists are basically your catch-all multi-element data structure; they can hold anything.
some_list = [1, 2, 'something', 6.2, ["another", "list!"], 7371]
print(some_list[3])
type(some_list)
6.2
list
Tuples are like lists, except they're immutable once you've built them (and denoted by parentheses, instead of brackets).
some_tuple = (1, 2, 'something', 6.2, ["another", "list!"], 7371)
print(some_tuple[5])
type(some_tuple)
7371
tuple
Sets are probably the most different: they are mutable (can be changed), but are unordered and can only contain unique items (they automatically drop duplicates you try to add). They are denoted by braces.
some_set = {1, 1, 1, 1, 1, 86, "something", 73}
some_set.add(1)
print(some_set)
type(some_set)
{73, 'something', 1, 86}
set
Finally, dictionaries. Other terms that may be more familiar include: maps, hashmaps, or associative arrays. They're a combination of sets (for their key mechanism) and lists (for their value mechanism).
some_dict = {"key": "value", "another_key": [1, 3, 4], 3: ["this", "value"]}
print(some_dict["another_key"])
type(some_dict)
[1, 3, 4]
dict
Dictionaries explicitly set up a mapping from keys--which are unique, like the elements of a set--to values, which can be pretty much any object. These are very powerful structures for data science-y applications.
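A few handy dictionary operations, using the some_dict from above: .keys() and .values() hand back the keys and values, and the in operator tests for key membership.
print(some_dict.keys())
print(some_dict.values())
print("key" in some_dict)
dict_keys(['key', 'another_key', 3])
dict_values(['value', [1, 3, 4], ['this', 'value']])
True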
Ordered data structures in Python are 0-indexed (like C, C++, and Java). This means the first elements are at index 0:
print(some_list)
[1, 2, 'something', 6.2, ['another', 'list!'], 7371]
index = 0
print(some_list[index])
1
However, using colon notation, you can "slice out" entire sections of ordered structures.
start = 0
end = 3
print(some_list[start : end])
[1, 2, 'something']
Note that the starting index is inclusive, but the ending index is exclusive. Also, if you omit the starting index, Python assumes you mean 0 (start at the beginning); likewise, if you omit the ending index, Python assumes you mean "go to the very end".
print(some_list[:end])
[1, 2, 'something']
start = 1
print(some_list[start:])
[2, 'something', 6.2, ['another', 'list!'], 7371]
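Two more slicing tricks worth knowing (still using some_list): negative indices count backwards from the end, and an optional third number sets the step size.
print(some_list[-1])
print(some_list[::2])
7371
[1, 'something', ['another', 'list!']]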
Python supports two kinds of loops: for and while.
for loops in Python are, in practice, closer to for-each loops in other languages: they iterate through collections of items, rather than incrementing indices.
for item in some_list:
print(item)
1
2
something
6.2
['another', 'list!']
7371
The for statement iterates over the collection (some_list), assigning each element in turn to the loop variable (item) and executing the loop body (print(item)).
But if you need to iterate by index, check out the enumerate function:
for index, item in enumerate(some_list):
print("{}: {}".format(index, item))
0: 1
1: 2
2: something
3: 6.2
4: ['another', 'list!']
5: 7371
while loops operate as you've probably come to expect: there is some associated boolean condition, and as long as that condition remains True, the loop will keep happening.
i = 0
while i < 10:
print(i)
i += 2
0
2
4
6
8
IMPORTANT: Do not forget to perform the update step in the body of the while loop! After using for loops, it's easy to become complacent and think that Python will update things automatically for you. If you forget that critical i += 2 line in the loop body, this loop will go on forever...
Another cool looping utility when you have multiple collections of identical length you want to loop through simultaneously: the zip() function.
list1 = [1, 2, 3]
list2 = [4, 5, 6]
list3 = [7, 8, 9]
for x, y, z in zip(list1, list2, list3):
print("{} {} {}".format(x, y, z))
1 4 7
2 5 8
3 6 9
This "zips" together the lists and picks corresponding elements from each for every loop iteration. Way easier than trying to set up a numerical index to loop through all three simultaneously, but you can even combine this with enumerate
to do exactly that:
for index, (x, y, z) in enumerate(zip(list1, list2, list3)):
print("{}: ({}, {}, {})".format(index, x, y, z))
0: (1, 4, 7)
1: (2, 5, 8)
2: (3, 6, 9)
Conditionals, or if statements, allow you to branch the execution of your code depending on certain circumstances.
In Python, this entails three keywords: if, elif, and else.
grade = 82
if grade > 90:
print("A")
elif grade > 80:
print("B")
else:
print("Something else")
B
One important difference from C/C++/Java parlance: there's no "else if" or "elseif", just "elif". It's admittedly weird, but it's Python.
Conditionals, when used with loops, offer a powerful way of slightly tweaking loop behavior with two keywords: continue and break.
The former is used when you want to skip an iteration of the loop, but nonetheless keep going on to the next iteration.
list_of_data = [4.4, 1.2, 6898.32, "bad data!", 5289.24, 25.1, "other bad data!", 52.4]
for x in list_of_data:
if type(x) == str:
continue
# This stuff gets skipped anytime the "continue" is run
print(x)
4.4
1.2
6898.32
5289.24
25.1
52.4
break, on the other hand, literally slams the brakes on a loop, pulling you out of the innermost enclosing loop immediately.
import random
i = 0
iters = 0
while True:
iters += 1
i += random.randint(0, 10)
if i > 1000:
break
print(iters)
189
New in Python 3.10 is structural pattern matching via the match statement--think of switch statements from other languages, but considerably more powerful:
def http_error(status):
match status:
case 400:
return "Bad request"
case 404:
return "Not found"
case 418:
return "I'm a teapot"
# If an exact match is not confirmed, this last case will be used if provided
case _:
return "Something's wrong with the internet"
http_error(404)
'Not found'
http_error(502)
"Something's wrong with the internet"
These can get really complex, so I'd recommend checking out the PEP documentation.
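For a small taste of what's possible, here's a quick sketch (the describe function is made up for illustration) that uses tuple patterns, capture variables, and a guard:
def describe(point):
    match point:
        case (0, 0):
            return "the origin"
        case (x, 0):
            return "on the x-axis at x = {}".format(x)
        case (x, y) if x == y:
            return "on the diagonal at x = y = {}".format(x)
        case (x, y):
            return "just some point: ({}, {})".format(x, y)
describe((3, 3))
'on the diagonal at x = y = 3'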
Python has a great file I/O library. There are usually third-party libraries that expedite reading certain often-used formats (JSON, XML, binary formats, etc), but you should still be familiar with input/output handles and how they work:
text_to_write = "I want to save this to a file."
f = open("some_file.txt", "w")
f.write(text_to_write)
f.close()
This code writes the string on the first line to a file named some_file.txt
. We can read it back:
f = open("some_file.txt", "r")
from_file = f.read()
f.close()
print(from_file)
I want to save this to a file.
Take note of what changed: when writing, we used a "w" character in the open argument, but when reading we used "r". Hopefully this is easy to remember.
Also, when reading/writing binary files, you have to include a "b": "rb" or "wb".
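One more idiom worth knowing (a small sketch of the same read as before): the with statement opens a file and closes it for you automatically when the block ends, even if an error occurs partway through.
with open("some_file.txt", "r") as f:
    from_file = f.read()
print(from_file)
I want to save this to a file.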
A core tenet in writing functions is that functions should do one thing, and do it well.
Writing good functions makes code much easier to troubleshoot and debug, as the code is already logically separated into components that perform very specific tasks. Thus, if your application is breaking, you usually have a good idea where to start looking.
WARNING: It's very easy to get caught up writing "god functions": one or two massive functions that essentially do everything you need your program to do. But if something breaks, this design is very difficult to debug.
Homework assignments will often require you to break your code into functions so different portions can be autograded.
Functions have a header definition and a body:
def some_function(): # This line is the header
pass # Everything after (that's indented) is the body
This function doesn't do anything, but it's perfectly valid. We can call it:
some_function()
Not terribly interesting, but a good outline. To make it interesting, we should add input arguments and return values:
def vector_magnitude(vector):
d = 0.0
for x in vector:
d += x ** 2
return d ** 0.5
v1 = [1, 1]
d1 = vector_magnitude(v1)
print(d1)
1.4142135623730951
v2 = [53.3, 13.4]
d2 = vector_magnitude(v2)
print(d2)
54.95862079783298
If you looked at our previous vector_magnitude function and thought "there must be an easier way to do this", then you were correct: that easier way is NumPy arrays.
NumPy arrays are the result of taking Python lists and adding a ton of back-end C++ code to make them really efficient.
Two areas where they excel: vectorized programming and fancy indexing.
Vectorized programming is perfectly demonstrated with our previous vector_magnitude function: since we're performing the same operation on every element of the vector, NumPy allows us to build code that implicitly handles the loop:
import numpy as np
def vectorized_magnitude(vector):
return (vector ** 2).sum() ** 0.5
v1 = np.array([1, 1])
d1 = vectorized_magnitude(v1)
print(d1)
1.4142135623730951
v2 = np.array([53.3, 13.4])
d2 = vectorized_magnitude(v2)
print(d2)
54.95862079783298
We've also seen indexing and slicing before; here, however, NumPy really shines.
Let's say we have some super high-dimensional data:
X = np.random.random((500, 600, 250))
We can take statistics of any dimension or slice we want:
X[:400, 100:200, 0].mean()
0.5013505307634563
X[X < 0.01].std()
0.0028899911733802374
X[:400, 100:200, 0].mean(axis = 1)
array([0.49737109, 0.44833888, 0.53074097, ..., 0.52508047, 0.51432458, 0.4565121 ])
We'll end our Python crash-course with a bit of a review from 3360 or your previous intro-to-ML experience: document classification with Naive Bayes and Logistic Regression.
Hopefully you're familiar with this abstraction for modeling documents.
This model assumes that each word in a document is drawn independently from a multinomial distribution over possible words (a multinomial distribution is a generalization of a Bernoulli distribution to multiple values). Although this model ignores the ordering of words in a document, it works surprisingly well for a number of tasks, including classification.
In short, it says: word order doesn't matter nearly as much--or perhaps, at all--as word frequency.
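As a tiny sketch of the idea, the built-in collections.Counter turns a document into a bag of words in one line (the sentence here is made up):
from collections import Counter

document = "the quick brown fox jumps over the lazy dog the end"
bag = Counter(document.split())   # word -> count; all ordering information is gone
print(bag["the"])
print(bag["fox"])
3
1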
With any (discriminative) classification problem, you're asking: what's the probability of a label given the data? In our document classification example, this question is: what is the probability of the document class, given the document itself?
Formally, for a document $x$ and label $y$: $P(y | x)$
If we're using individual word counts as features ($x_1$ is word 1, $x_2$ is word 2, and so on), then by the rules of conditional probability, this probability would expand into something like this:
$$ P(y | x_1, x_2, ..., x_n) = \frac{P(y)P(x_1, x_2, ..., x_n | y)}{P(x_1, x_2, ..., x_n)} $$
This is, for all practical purposes, intractable. Hence, "naive": we make each word conditionally independent of the others, given the label:
$$ P(x_i | y, x_1, x_2, ..., x_{i - 1}, x_{i + 1}, ..., x_n) = P(x_i | y) $$
Applying this to every word $x_i$, the original problem reduces to:
$$ P(y | x_1, x_2, ..., x_n) = \frac{P(y) \Pi_{i = 1}^n P(x_i | y)}{P(x_1, x_2, ..., x_n)} $$
And since the denominator is the same for every possible label $y$ (it doesn't depend on $y$ at all), we can effectively ignore it as a constant, thereby giving us a decision rule:
$$ \hat{y} = \textrm{argmax}_y P(y) \Pi_{i = 1}^n P(x_i | y) $$
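In code, that decision rule is just an argmax over per-class scores, usually computed in log space to avoid numerical underflow. Here's a minimal sketch, assuming you've already estimated the log priors and log conditional probabilities (all of the names here are hypothetical):
import numpy as np

def nb_predict(log_prior, log_cond, counts):
    # log_prior: shape (n_classes,), the log P(y) values
    # log_cond: shape (n_classes, vocab_size), the log P(x_i | y) values
    # counts: shape (vocab_size,), word counts for a single document
    scores = log_prior + log_cond @ counts   # log P(y) + sum_i count_i * log P(x_i | y)
    return np.argmax(scores)                 # the y-hat with the highest score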
If you really want to dig into what makes Naive Bayes an improvement over the "optimal Bayes classifier", you can count exactly how many parameters are required in either case.
We'll take the simple example: the decision variable $Y$ is boolean, and the observations $X$ have $n$ attributes, each of which is also boolean. Formally, that looks like this:
$$ \theta_{ij} = P(X = x_i | Y = y_j) $$
where $i$ takes on $2^n$ possible values (one for each of the possible combinations of boolean values in the array $X$), and $j$ takes on 2 possible values (true or false). For any fixed $j$, the sum over $i$ of $\theta_{ij}$ has to be 1 (it's a probability distribution), so for any particular $y_j$ the $2^n$ values of $x_i$ require only $2^n - 1$ free parameters. Given two possible values for $j$ (since $Y$ is boolean!), we must estimate a total of $2(2^n - 1)$ such $\theta_{ij}$ parameters.
This is a problem!
This means that, if our observations $X$ have three attributes--3-dimensional data--we need to estimate 14 distinct parameters, essentially one for each possible boolean combination of attributes in $X$ and label $Y$. It gets exponentially worse as the number of boolean attributes increases--if $X$ has 30 boolean attributes, we'll have to estimate over 2 billion parameters.
This is why the conditional independence assumption of Naive Bayes is so critical: more than anything, it substantially reduces the number of required estimated parameters. If, through conditional independence, we have
$$ P(X_1, X_2, ..., X_n | Y) = \Pi_{i = 1}^n P(X_i | Y) $$
or, to illustrate more concretely, observations $X$ with 3 attributes each
$$ P(X_1, X_2, X_3 | Y) = P(X_1 | Y) P(X_2 | Y) P(X_3 | Y) $$
we've just gone from requiring the aforementioned 14 parameters, to 6!
Formally: we've gone from requiring $2(2^n - 1)$ parameters to $2n$.
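A quick back-of-the-envelope check of those counts in Python:
n = 30
print(2 * (2 ** n - 1))   # without conditional independence
print(2 * n)              # with it
2147483646
60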
Naive Bayes is a fantastic algorithm and works well in practice. However, it has some important drawbacks to be aware of.
Logistic regression is a bit different. Rather than estimating the parametric form of the data $P(x_i | y)$ and $P(y)$ in order to get to the posterior $P(y | x)$, here we're learning the decision boundary $P(y | x)$ directly.
Ideally we want some kind of output function between 0 and 1--so let's just go with the logistic (sigmoid) function:
%matplotlib inline
import matplotlib.pyplot as plt
x = np.linspace(-5, 5, 100)
y = 1 / (1 + np.exp(-x))
plt.plot(x, y)
[<matplotlib.lines.Line2D at 0x10a05d330>]
We just adapt the logistic function to work with our document features $x_i$ and some weights $w_i$:
$$ P(Y = 0 | X) = \frac{1}{1 + \textrm{exp}(w_0 + \sum_i w_i X_i)} $$
Then finding $P(Y = 1 | X)$ is just $1 - P(Y = 0 | X)$, or
$$ P(Y = 1 | X) = \frac{\textrm{exp}(w_0 + \sum_i w_i X_i)}{1 + \textrm{exp}(w_0 + \sum_i w_i X_i)} $$
This second equation, for $P(Y = 1 | X)$, arises directly from the fact that these two terms must sum to 1. Write it out yourself if you need convincing!
So how do we train a logistic regression model? Here's where things get a tiny bit trickier than Naive Bayes.
In Naive Bayes, the bag-of-words model was 90% of the classifier. Sure, we needed some marginal probabilities and priors, but the word counting was easily the bulk of it.
Here, the word counting is still important, but now we have this entire array of weights we didn't have before. These weights correspond to feature relevance--how important the features are to prediction. In Naive Bayes we just kind of assumed that was implicit in the count of the words--higher counts, more relevance. But logistic regression separates these concepts, meaning we now have to learn the weights on our own.
We have our training data: $\{(X^{(j)}, y^{(j)})\}_{j = 1}^n$, and each $X^{(j)} = (x^{(j)}_1, ..., x^{(j)}_d)$ for $d$ features/dimensions/words.
And we want to learn: $\hat{\textbf{w}} = \textrm{argmax}_{\textbf{w}} \Pi_{j = 1}^n P(y^{(j)} | X^{(j)}, \textbf{w})$
Our conditional log likelihood then takes the form: $l(\textbf{w}) = \textrm{ln} \Pi_j P(y^j | \vec{x}^j, \textbf{w})$
$$ = \sum_j \left[ y^j (w_0 + \sum_i^d w_i x_i^j) - \textrm{ln}(1 + \textrm{exp}(w_0 + \sum_i^d w_i x_i^j)) \right] $$
How did we get here?
First, note that the likelihood function is typically formally denoted as
$$ W \leftarrow \textrm{arg max}_W \Pi_l P(Y^l | X^l, W) $$
for each training example $X^l$ with corresponding ground-truth label $Y^l$ (they are multiplied together because we assume each observation is independent of the others). We include the weights $W$ in this expression because the probability is absolutely a function of the weights, and we want to pick the combination of weights $W$ that makes the probability expression as large as possible.
Second, because we're both pragmatic enough to use a short-cut whenever we can and evil enough to know it'll confuse other people, we never actually work directly with the likelihood as stated above. Instead, we work with the log-likelihood, by literally taking the log of the function:
$$ W \leftarrow \textrm{arg max}_W \sum_l \textrm{ln} P(Y^l | X^l, W) $$
Recall that the log of a product is equivalent to the sum of logs.
Third, the probability statement $P(Y^l | X^l, W)$ has two main terms, since $Y$ can be either 1 or 0; we want to pick the one with the largest probability. So we expand that term into the following:
$$ l(W) = \sum_l Y^l \textrm{ln} P(Y^l = 1|X^l, W) + (1 - Y^l) \textrm{ln} P(Y^l = 0 | X^l, W) $$
where $l(W)$ is our log-likelihood function.
Hopefully this looks somewhat familiar to you: it's a lot like finding the expected value $E[X]$ of a discrete random variable $X$, where you take each possible value $X = x$ and multiply it by its probability $P(X = x)$, summing them all together. You can see the case $Y = 1$ on the left, and $Y = 0$ on the right, both being multiplied by their corresponding conditional probabilities.
Hopefully you'll also note: since you're using this equation for training, $Y^l$ will ONLY ever be 1 or 0, thereby zeroing out one side of the equation or the other for every single training instance. So that's kinda nice?
Fourth, get ready for some math! If we have
$$ l(W) = \sum_l Y^l \textrm{ln} P(Y^l = 1|X^l, W) + (1 - Y^l) \textrm{ln} P(Y^l = 0 | X^l, W) $$
Expand the last term:
$$ l(W) = \sum_l Y^l \textrm{ln} P(Y^l = 1|X^l, W) + \textrm{ln} P(Y^l = 0 | X^l, W) - Y^l \textrm{ln} P(Y^l = 0|X^l, W) $$
Combine terms with the same $Y^l$ coefficient (first and third terms):
$$ l(W) = \sum_l Y^l \left[ \textrm{ln} P(Y^l = 1|X^l, W) - \textrm{ln} P(Y^l = 0|X^l, W) \right] + \textrm{ln} P(Y^l = 0 | X^l, W) $$
Recall properties of logarithms--when subtracting two logs with the same base, you can combine their arguments into a single log dividing the two:
$$ l(W) = \sum_l Y^l \left[ \textrm{ln} \frac{P(Y^l = 1|X^l, W)}{P(Y^l = 0|X^l, W)} \right] + \textrm{ln} P(Y^l = 0 | X^l, W) $$
Now things get interesting--remember earlier where we defined exact parametric forms of $P(Y = 1 | X)$ and $P(Y = 0|X)$? Substitute those back in, and you'll get:
$$ l(W) = \sum_l \left[ Y^l (w_0 + \sum_i^d w_i X_i^l) - \textrm{ln}(1 + \textrm{exp}(w_0 + \sum_i^d w_i X_i^l)) \right] $$
which is exactly the equation we had before we started going through these proofs.
Good news! $l(\textbf{w})$ is a concave function of $\textbf{w}$, meaning no pesky local optima.
Bad news! There's no closed-form solution for the $\textbf{w}$ that maximizes $l(\textbf{w})$ (feel free to try: take its derivative, set it to 0, and solve; you'll get a transcendental equation with no closed-form solution).
Good news! Concave (convex) functions are easy to optimize!
Maximizing a concave function is the same as minimizing the corresponding convex function (just flip the sign)
Gradient: $\nabla_{\textbf{w}} l(\textbf{w}) = \left[ \frac{\partial l(\textbf{w})}{\partial w_0}, ..., \frac{\partial l(\textbf{w})}{\partial w_n} \right] $
Update rule: $w_i^{(t + 1)} = w_i^{(t)} + \eta \frac{\partial l(\textbf{w})}{\partial w_i}$
Which ultimately leads us to gradient ascent for logistic regression.
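To make that concrete, here's a minimal NumPy sketch of the update loop, assuming a 0/1 label vector y and an n-by-d feature matrix X (the function name, the fixed learning rate, and the fixed iteration count are all arbitrary choices for illustration):
import numpy as np

def gradient_ascent(X, y, eta=0.01, iters=1000):
    # X: (n, d) feature matrix; y: (n,) array of 0/1 labels
    d = X.shape[1]
    w0 = 0.0               # bias weight w_0
    w = np.zeros(d)        # feature weights w_1 ... w_d
    for _ in range(iters):
        p1 = 1.0 / (1.0 + np.exp(-(w0 + X @ w)))   # P(Y = 1 | X, w) for every example
        error = y - p1                             # Y^l - P(Y^l = 1 | X^l, w)
        w0 += eta * error.sum()                    # gradient ascent step for the bias
        w += eta * (X.T @ error)                   # gradient ascent step for the weights
    return w0, w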
This is Assignment 1!
In addition to going over some basic concepts in probability, Naive Bayes, and Logistic Regression, you'll also implement some document classification code from scratch (don't let me catch anyone using scikit-learn, mmk).
The hardest part in the coding will be implementing gradient descent! It's not a lot of code--especially if you use NumPy vectorized programming--but it will take some sitting-and-thinking-and-whiteboarding time (unless you know this stuff cold already, I suppose)!
There is also some theory and small proofs.
Don't be intimidated. I purposely made this homework tricky both to get an idea of your level of understanding of the topics so I can gauge how to proceed in the course, and also so you have an idea where your weaknesses are.
ASK ME FOR HELP! Helping students is literally my day job. Don't be shy; if you're stuck, reach out for help, both from me AND your student colleagues!