What is Whisper?
Whisper is a state-of-the-art speech recognition system from OpenAI, trained on 680,000 hours of multilingual and multitask supervised data collected from the web. This large and diverse dataset improves robustness to accents, background noise, and technical language. In addition, it supports transcription in multiple languages, as well as translation from those languages into English. OpenAI released the models and code to serve as a foundation for building useful applications that leverage speech recognition.
One big drawback of Whisper is that it cannot tell you who is speaking in a conversation. That is a problem when analyzing conversations. This is where diarization comes in. Diarization is the process of identifying who is speaking in a conversation.
In this tutorial, you will learn how to identify the speakers and then match them with Whisper's transcription. We will use pyannote-audio to accomplish this. Let's get started!
Preparing the audio
First, we need to prepare the audio file. We will use the first 20 minutes of Lex Fridman's podcast with Yann LeCun. To download the video and extract the audio, we will use the yt-dlp package.
!pip install -U yt-dlp
We also need ffmpeg installed:
!wget -O - -q https://github.com/yt-dlp/FFmpeg-Builds/releases/download/latest/ffmpeg-master-latest-linux64-gpl.tar.xz | xz -qdc| tar -x
Now we can do the actual download and audio extraction from the command line:
!yt-dlp -xv --ffmpeg-location ffmpeg-master-latest-linux64-gpl/bin --audio-format wav -o download.wav -- https://youtu.be/SGzMElJ11Cc
We now have download.wav in our working directory. Let's cut out the first 20 minutes of the audio. We can do this in just a few lines of code with the pydub package.
!pip install pydub
from pydub import AudioSegment

t1 = 0 * 1000  # pydub works in milliseconds
t2 = 20 * 60 * 1000

newAudio = AudioSegment.from_wav("download.wav")
a = newAudio[t1:t2]
a.export("audio.wav", format="wav")
audio.wav now contains the first 20 minutes of the audio file.
Diarization with Pyannote
pyannote.audio is an open-source toolkit written in Python for speaker diarization. Based on the PyTorch machine learning framework, it provides a set of trainable end-to-end neural building blocks that can be combined and jointly optimized to build speaker diarization pipelines. pyannote.audio also comes with pretrained models and pipelines covering a wide range of domains such as voice activity detection, speaker segmentation, overlapped speech detection, and speaker embedding, most of which reach state-of-the-art performance.
Let's install Pyannote and run it on the audio to generate the diarization:
!pip install pyannote.audio
from pydub import AudioSegment
from pyannote.audio import Pipeline

pipeline = Pipeline.from_pretrained('pyannote/speaker-diarization')

# prepend 2 seconds of silence as a spacer; this makes the segment
# bookkeeping easier when we stitch the audio back together later
spacermilli = 2000
spacer = AudioSegment.silent(duration=spacermilli)
audio = spacer.append(AudioSegment.from_wav("audio.wav"), crossfade=0)
audio.export('input_prep.wav', format='wav')

DEMO_FILE = {'uri': 'blabal', 'audio': 'input_prep.wav'}
dz = pipeline(DEMO_FILE)

with open("diarization.txt", "w") as text_file:
    text_file.write(str(dz))
Let's print it to see what it looks like:
print(*list(dz.itertracks(yield_label=True))[:10], sep="\n")
Output:
(<Segment(2.03344, 36.8128)>, 0, 'SPEAKER_00')
(<Segment(38.1122, 51.3759)>, 0, 'SPEAKER_00')
(<Segment(51.8653, 90.2053)>, 1, 'SPEAKER_01')
(<Segment(91.2853, 92.9391)>, 1, 'SPEAKER_01')
(<Segment(94.8628, 116.497)>, 0, 'SPEAKER_00')
(<Segment(116.497, 124.124)>, 1, 'SPEAKER_01')
(<Segment(124.192, 151.597)>, 1, 'SPEAKER_01')
(<Segment(152.018, 179.12)>, 1, 'SPEAKER_01')
(<Segment(180.318, 194.037)>, 1, 'SPEAKER_01')
(<Segment(195.016, 207.385)>, 0, 'SPEAKER_00')
That already looks pretty good, but let's clean the data up a bit:
import re

spacermilli = 2000  # length in ms of the silence spacer prepended to the audio

def millisec(timeStr):
    spl = timeStr.split(":")
    s = (int)((int(spl[0]) * 60 * 60 + int(spl[1]) * 60 + float(spl[2])) * 1000)
    return s

dz = open('diarization.txt').read().splitlines()
dzList = []
for l in dz:
    start, end = tuple(re.findall('[0-9]+:[0-9]+:[0-9]+\.[0-9]+', string=l))
    start = millisec(start) - spacermilli
    end = millisec(end) - spacermilli
    lex = not re.findall('SPEAKER_01', string=l)
    dzList.append([start, end, lex])

print(*dzList[:10], sep='\n')
[33, 34812, True]
[36112, 49375, True]
[49865, 88205, False]
[89285, 90939, False]
[92862, 114496, True]
[114496, 122124, False]
[122191, 149596, False]
[150018, 177119, False]
[178317, 192037, False]
[193015, 205385, True]
Now we have the diarization data in a list. The first two numbers are the start and end times of the speaker segment in milliseconds. The third is a boolean telling us whether the speaker is Lex.
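As a quick standalone sanity check (not part of the original pipeline), the ten cleaned entries printed above are already enough to estimate how much each speaker talks:

```python
# the first ten cleaned diarization entries printed above: [start_ms, end_ms, is_lex]
dzList = [
    [33, 34812, True], [36112, 49375, True], [49865, 88205, False],
    [89285, 90939, False], [92862, 114496, True], [114496, 122124, False],
    [122191, 149596, False], [150018, 177119, False], [178317, 192037, False],
    [193015, 205385, True],
]

# total speaking time per speaker, in milliseconds
lex_ms = sum(end - start for start, end, is_lex in dzList if is_lex)
yann_ms = sum(end - start for start, end, is_lex in dzList if not is_lex)
print(lex_ms, yann_ms)  # 82046 115848
```

So in the first three and a half minutes, Yann speaks noticeably more than Lex.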
Preparing the audio file from the diarization
Next, we will stitch the audio segments together according to the diarization, with a spacer as the delimiter.
from pydub import AudioSegment
import re

spacermilli = 2000
spacer = AudioSegment.silent(duration=spacermilli)
# the same spacer-prepended audio the diarization ran on
audio = spacer.append(AudioSegment.from_wav("audio.wav"), crossfade=0)

sounds = spacer
segments = []

dz = open('diarization.txt').read().splitlines()
for l in dz:
    start, end = tuple(re.findall('[0-9]+:[0-9]+:[0-9]+\.[0-9]+', string=l))
    start = int(millisec(start))  # milliseconds
    end = int(millisec(end))  # milliseconds
    segments.append(len(sounds))
    sounds = sounds.append(audio[start:end], crossfade=0)
    sounds = sounds.append(spacer, crossfade=0)

sounds.export("dz.wav", format="wav")  # exports to a wav file in the current path
print(segments[:8])
[2000, 38779, 54042, 94382, 98036, 121670, 131297, 160702]
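These numbers are millisecond offsets into the combined file dz.wav: the file begins with one spacer, and every segment is followed by another. A minimal standalone sketch, using the lengths of the first two diarization segments from above, reproduces the first two offsets:

```python
spacermilli = 2000
seg_durations = [34779, 13263]  # lengths of the first two diarization segments, in ms

offset = spacermilli  # the combined file begins with one spacer
segments = []
for dur in seg_durations:
    segments.append(offset)
    offset += dur + spacermilli  # each segment is followed by another spacer
print(segments)  # [2000, 38779]
```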
Transcribing with Whisper
Next, we will use Whisper to transcribe the different segments of the audio file. Important: there is a version conflict with pyannote.audio that results in an error. Our workaround is to run Pyannote first and then Whisper. You can safely ignore the error.
Install OpenAI Whisper:
!pip install git+https://github.com/openai/whisper.git
Run OpenAI Whisper on the prepared audio file. It writes the transcription to a file. You can adjust the model size to your needs; you can find all models on the model card on GitHub.
!whisper dz.wav --language en --model base
[00:00.000 --> 00:04.720] The following is a conversation with Yann LeCun,
[00:04.720 --> 00:06.560] his second time on the podcast.
[00:06.560 --> 00:11.160] He is the chief AI scientist at Meta, formerly Facebook,
[00:11.160 --> 00:15.040] professor at NYU, touring award winner,
[00:15.040 --> 00:17.600] one of the seminal figures in the history
[00:17.600 --> 00:20.460] of machine learning and artificial intelligence,
...
To work with the .vtt file, we need to install the webvtt-py library:
!pip install -U webvtt-py
Let's take a look at the data:
import webvtt

captions = [[(int)(millisec(caption.start)), (int)(millisec(caption.end)), caption.text]
            for caption in webvtt.read('dz.wav.vtt')]
print(*captions[:8], sep='\n')
[0, 4720, 'The following is a conversation with Yann LeCun,']
[4720, 6560, 'his second time on the podcast.']
[6560, 11160, 'He is the chief AI scientist at Meta, formerly Facebook,']
[11160, 15040, 'professor at NYU, touring award winner,']
[15040, 17600, 'one of the seminal figures in the history']
[17600, 20460, 'of machine learning and artificial intelligence,']
[20460, 23940, 'and someone who is brilliant and opinionated']
[23940, 25400, 'in the best kind of way,']
...
Matching the transcription and the diarization
Next, we will match each transcription line to some diarization segments, and display everything by generating an HTML file. To get the timing right, we need to take care of the parts of the original audio that are not inside any diarization segment. We append a new div for every segment in our audio.
# we need this for our HTML file (basically just some styling)
preS = '<!DOCTYPE html>\n<html lang="en">\n  <head>\n    <meta charset="UTF-8">\n    <meta name="viewport" content="width=device-width, initial-scale=1.0">\n    <meta http-equiv="X-UA-Compatible" content="ie=edge">\n    <title>Lexicap</title>\n    <style>\n      body {\n        font-family: sans-serif;\n        font-size: 18px;\n        color: #111;\n        padding: 0 0 1em 0;\n      }\n      .l {\n        color: #050;\n      }\n      .s {\n        display: inline-block;\n      }\n      .e {\n        display: inline-block;\n      }\n      .t {\n        display: inline-block;\n      }\n      #player {\n\t\tposition: sticky;\n\t\ttop: 20px;\n\t\tfloat: right;\n\t}\n    </style>\n  </head>\n  <body>\n    <h2>Yann LeCun: Dark Matter of Intelligence and Self-Supervised Learning | Lex Fridman Podcast #258</h2>\n    <div id="player"></div>\n    <script>\n      var tag = document.createElement(\'script\');\n      tag.src = "https://www.youtube.com/iframe_api";\n      var firstScriptTag = document.getElementsByTagName(\'script\')[0];\n      firstScriptTag.parentNode.insertBefore(tag, firstScriptTag);\n      var player;\n      function onYouTubeIframeAPIReady() {\n        player = new YT.Player(\'player\', {\n          height: \'210\',\n          width: \'340\',\n          videoId: \'SGzMElJ11Cc\',\n        });\n      }\n      function setCurrentTime(timepoint) {\n        player.seekTo(timepoint);\n        player.playVideo();\n      }\n    </script><br>\n'
postS = '\t</body>\n</html>'

html = list(preS)  # list of characters; joined back into a string at the end
for i in range(len(segments)):
    # find the first caption that starts inside this diarization segment
    idx = 0
    for idx in range(len(captions)):
        if captions[idx][0] >= (segments[i] - spacermilli):
            break
    # consume captions until the next segment begins
    while (idx < len(captions)) and ((i == len(segments) - 1) or (captions[idx][1] < segments[i + 1])):
        c = captions[idx]
        start = dzList[i][0] + (c[0] - segments[i])  # map caption time back to the original audio
        if start < 0:
            start = 0
        idx += 1
        start = start / 1000.0
        startStr = '{0:02d}:{1:02d}:{2:05.2f}'.format(
            (int)(start // 3600), (int)(start % 3600 // 60), start % 60)
        html.append('\t\t\t<div class="c">\n')
        html.append(f'\t\t\t\t<a class="l" href="#{startStr}" id="{startStr}">link</a> |\n')
        html.append(f'\t\t\t\t<div class="s"><a href="javascript:void(0);" onclick=setCurrentTime({int(start)})>{startStr}</a></div>\n')
        html.append(f'\t\t\t\t<div class="t">{"[Lex]" if dzList[i][2] else "[Yann]"} {c[2]}</div>\n')
        html.append('\t\t\t</div>\n\n')
html.append(postS)
s = "".join(html)

with open("lexicap.html", "w") as text_file:
    text_file.write(s)
print(s)
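A small standalone check of the timestamp conversion used for the anchor labels, assuming an offset of 125.5 seconds (note the 05.2f width, which zero-pads the seconds field so labels stay a fixed width):

```python
start = 125.5  # seconds into the podcast
# hours : minutes : zero-padded seconds with two decimals
startStr = '{0:02d}:{1:02d}:{2:05.2f}'.format(
    int(start // 3600), int(start % 3600 // 60), start % 60)
print(startStr)  # 00:02:05.50
```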
You can view the result here, or see the full code as a notebook.