Space-efficient algorithms play an important role in dealing with large amounts of data. In such settings, one would like to analyze the data using only a small amount of “working space”. A key step in many algorithms for analyzing large data is to maintain a random sample (or a small number of random samples) from the data points. We consider two space-restricted settings: (i) the streaming model, where data arrives over time and one can use only a small amount of storage, and (ii) the query model, where we can preprocess the data into a low-space structure and answer sampling queries. We prove the following results in these two settings:

In the streaming setting, we would like to maintain a random sample from the elements seen so far. We prove that one can maintain a random sample using $O(\log n)$ random bits and $O(\log n)$ bits of space, where *n* is the number of elements seen so far. We also extend this result to the case where elements have weights.
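For context, the classical way to maintain a uniform sample from a stream is reservoir sampling (Algorithm R). The sketch below illustrates that baseline; note that it draws fresh random bits for every element, which is exactly the cost the $O(\log n)$-random-bit result above improves upon. The function name is ours, not the paper's.

```python
import random

def reservoir_sample(stream):
    """Maintain one uniform random sample from a stream.

    Classical Algorithm R: after the i-th element is processed, the
    stored sample is uniform over the first i elements. Unlike the
    scheme described above, this baseline uses fresh random bits for
    each element rather than O(log n) bits in total.
    """
    sample = None
    for i, x in enumerate(stream, start=1):
        # Replace the current sample with probability 1/i; a short
        # induction shows every element survives with probability 1/i
        # after i elements.
        if random.randrange(i) == 0:
            sample = x
    return sample
```

The invariant is easy to check: element *x* at position *j* is kept at step *j* with probability 1/*j* and survives each later step *i* with probability (*i* − 1)/*i*, so the product telescopes to 1/*i*.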

In the query model, there are *n* elements with weights $w_1, \ldots, w_n$ (which are *w*-bit integers), and one would like to sample a random element with probability proportional to its weight. Bringmann and Larsen (STOC 2013) showed how to sample such an element using $nw + 1$ bits of space (whereas the information-theoretic lower bound is *nw*). We consider the approximate sampling problem, where we are given an error parameter $\varepsilon$, and the sampling probability of an element may be off by an $\varepsilon$ factor. We give matching upper and lower bounds for this problem.