
I've made a program that scrapes all the pages of a website using goroutines:

func main() {
    start := time.Now()

    knownUrls := getKnownURLs(os.Getenv("SITEMAP_URL"))

    var wg sync.WaitGroup
    for index, url := range knownUrls {
        wg.Add(1)

        fmt.Printf("%d/%d\n", index+1, len(knownUrls))

        go func(url string) {
            defer wg.Done()
            if err := indexArticleFromURL(url, client); err != nil {
                log.Fatalf("Error indexing doc: %s", err)
            }
        }(url) // pass url explicitly: before Go 1.22 the loop variable is shared
    }

    wg.Wait()

    elapsed := time.Since(start)
    fmt.Printf("Took %s", elapsed)
}

This works shockingly fast: 5.9s for a thousand pages, to be exact. But it bothers me that if a website has thousands of pages, it will create thousands of goroutines.

So I refactored it with the golang.org/x/sync/semaphore package. From what I understand, it should limit the number of goroutines to what the processor can handle. That shouldn't decrease performance, since the program above already physically could not use more OS threads than the CPU provides.

func main() {
    start := time.Now()
    ctx := context.Background()

    knownUrls := getKnownURLs(os.Getenv("SITEMAP_URL"))

    var (
        maxWorkers = runtime.GOMAXPROCS(0)
        sem        = semaphore.NewWeighted(int64(maxWorkers))
    )

    for index, url := range knownUrls {
        if err := sem.Acquire(ctx, 1); err != nil {
            log.Printf("Failed to acquire semaphore: %v", err)
            break
        }

        fmt.Printf("%d/%d\n", index+1, len(knownUrls))

        go func(url string) {
            defer sem.Release(1)
            if err := indexArticleFromURL(url, client); err != nil {
                log.Fatalf("Error indexing doc: %s", err)
            }
        }(url)
    }

    // Wait for the remaining workers by draining every slot.
    if err := sem.Acquire(ctx, int64(maxWorkers)); err != nil {
        log.Printf("Failed to acquire semaphore: %v", err)
    }

    elapsed := time.Since(start)
    fmt.Printf("Took %s", elapsed)
}

But now when I run the program, it takes significantly longer: 11+ seconds.

It seems like this shouldn't be the case, since runtime.GOMAXPROCS(0) returns the maximum number of CPUs that can be executing simultaneously.

Why is the semaphore version slower? And how do I make it match the performance of the "unsafe" program, while making sure the number of goroutines won't crash it?

Comments:

  • You can have far more goroutines waiting on I/O concurrently than you have CPU cores, since your workload isn't CPU-bound. Which is all to say that this isn't an appropriate use case for a CPU-count semaphore.
  • (...and to put a point on it, the "unsafe" program was never unsafe to begin with. If you determine that there's a limit beyond which behavior deteriorates in some way -- say, the server you're connecting to timing out -- then sure, use a semaphore, but place the number of slots at that limit beyond which things start to go wrong, not at the CPU core count. See the sketch below.)
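A minimal sketch of that suggestion: keep the weighted semaphore, but size it to what the server tolerates rather than to the CPU. It reuses ctx, knownUrls, indexArticleFromURL, and client from the question's code; the limit of 100 is a hypothetical placeholder to tune against the actual server.

const maxInFlight = 100 // assumption: the load the remote server tolerates, not the core count

sem := semaphore.NewWeighted(maxInFlight)
for _, url := range knownUrls {
    if err := sem.Acquire(ctx, 1); err != nil {
        break
    }
    go func(url string) {
        defer sem.Release(1)
        if err := indexArticleFromURL(url, client); err != nil {
            log.Printf("Error indexing %s: %v", url, err)
        }
    }(url)
}

// Wait for all in-flight workers by draining every slot.
_ = sem.Acquire(ctx, maxInFlight)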

2 Answers


With your original code, you have one OS thread per CPU core, but far more goroutines than threads. This is fine and normal: the Go runtime task-switches between goroutines internally, without involving the kernel scheduler, parking a goroutine whenever it's waiting for I/O and running another one that's ready.

If a task spends 99.999% of its time waiting on a network resource and 0.001% on CPU, then one CPU core can comfortably handle 1,000,000 goroutines at a time. You need enough memory for their stacks and heap allocations; the network protocol needs to be latency-tolerant enough that the remote server won't time out if a goroutine takes some time to be scheduled; and if all your connections go to the same server, it needs to be willing to handle that load. But as long as you have that memory, and the remote service (and intervening network stack) is as robust as your client-side code, you're fine. (HTTP/2 supports multiplexing to run an unbounded number of requests over a single TCP connection -- hopefully you're using it here.)
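To see this for yourself, here is a self-contained demonstration: pin the runtime to a single OS thread and park ten thousand goroutines on a simulated network wait. They all finish in about one second, because a parked goroutine costs memory, not CPU time.

package main

import (
    "fmt"
    "runtime"
    "sync"
    "time"
)

func main() {
    runtime.GOMAXPROCS(1) // a single core is plenty for I/O-bound work

    start := time.Now()
    var wg sync.WaitGroup
    for i := 0; i < 10000; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            time.Sleep(time.Second) // stands in for a network round trip
        }()
    }
    wg.Wait()

    fmt.Println(time.Since(start)) // ~1s, not 10000s
}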


When you introduce a semaphore with only as many slots as CPU cores, you completely defeat this machinery: instead of juggling thousands of requests at a time (working on the ones that are ready and parking the ones that aren't), your code now processes only as many requests at a time as there are CPU cores, even though those cores sit almost entirely idle while requests wait on the network. Of course it's slower; how could it be anything but?
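If you still want a bound on in-flight work (so a site with a million pages can't exhaust memory or hammer the server), set it at whatever the remote service tolerates, not at the core count. Here is a sketch using a fixed worker pool, reusing knownUrls, indexArticleFromURL, and client from the question; the worker count of 100 is an assumption to tune.

const numWorkers = 100 // assumption: sized to the server's tolerance, not the CPU

urls := make(chan string)
var wg sync.WaitGroup
for i := 0; i < numWorkers; i++ {
    wg.Add(1)
    go func() {
        defer wg.Done()
        for url := range urls { // each worker drains URLs until the channel closes
            if err := indexArticleFromURL(url, client); err != nil {
                log.Printf("Error indexing %s: %v", url, err)
            }
        }
    }()
}
for _, url := range knownUrls {
    urls <- url
}
close(urls)
wg.Wait()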




Goroutines are not threads or processes; they are closer to coroutines. So there is very little overhead even if you use thousands of goroutines, far more than the number of CPU cores.

However, if you end up spawning too many goroutines, it may be necessary to throttle them -- for example, to limit simultaneous access to a resource such as a database or a remote server.

In this case, the golang.org/x/sync/errgroup package is recommended: its SetLimit method caps the number of goroutines running at once (internally the limit is enforced with a buffered channel acting as a semaphore).
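A minimal sketch with errgroup (import golang.org/x/sync/errgroup), reusing knownUrls, indexArticleFromURL, and client from the question; the limit of 100 is an arbitrary assumption to tune:

g := new(errgroup.Group)
g.SetLimit(100) // at most 100 goroutines run at once; g.Go blocks when the limit is reached

for _, url := range knownUrls {
    url := url // capture per iteration; unnecessary on Go 1.22+
    g.Go(func() error {
        return indexArticleFromURL(url, client)
    })
}
if err := g.Wait(); err != nil {
    log.Printf("Error indexing doc: %v", err) // Wait returns the first non-nil error
}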

