mirror of https://framagit.org/veretcle/oolatoocs.git
synced 2025-12-06 06:43:15 +01:00

Compare commits
42 Commits
| Author | SHA1 | Date |
|---|---|---|
|  | 498095d3a8 |  |
|  | 5ee64014eb |  |
|  | 639582ba59 |  |
|  | 43ca862d5a |  |
|  | 47d7fdbd42 |  |
|  | 7334fb3d09 |  |
|  | 79ac915347 |  |
|  | e89e6e51ec |  |
|  | 7b21a0e3a7 |  |
|  | 43aa6dcd99 |  |
|  | cf5fe11b56 |  |
|  | 7bd0843cf6 |  |
|  | 402fcffc75 |  |
|  | b295cc5b94 |  |
|  | a882aaa59d |  |
|  | 259032a7b9 |  |
|  | e7f0c9c6f5 |  |
|  | 83c8da46e8 |  |
|  | 823f80729f |  |
|  | 5969e3a56a |  |
|  | 3ea2478512 |  |
|  | 5606d00da2 |  |
|  | 4cb80b0607 |  |
|  | bbe14f1f30 |  |
|  | 6fbc011914 |  |
|  | 8f23c2459b |  |
|  | 26805feadb |  |
|  | 3a8fd538fc |  |
|  | 891f46ec2f |  |
|  | cad7840f98 |  |
|  | 6e7299585a |  |
|  | 62efcb8112 |  |
|  | 49279e7f1f |  |
|  | f05669923f |  |
|  | 9da43beb34 |  |
|  | e99a666b18 |  |
|  | 3b18dac2fb |  |
|  | af977a1ee0 |  |
|  | b90b727783 |  |
|  | f8227f99c1 |  |
|  | 9f2ff119ff |  |
|  | c0244c8c30 |  |
2 .gitignore (vendored)

@@ -1,3 +1,5 @@
/target
.last_tweet
.config.toml
.config.json
.bsky.json
2871 Cargo.lock (generated)

File diff suppressed because it is too large.
17 Cargo.toml

@@ -1,28 +1,29 @@
[package]
name = "oolatoocs"
version = "3.0.3"
version = "4.5.1"
edition = "2021"

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[dependencies]
rand = "^0.8" # /!\ to be removed
chrono = "^0.4"
clap = "^4"
env_logger = "^0.10"
env_logger = "^0.11"
futures = "^0.3"
html-escape = "^0.2"
log = "^0.4"
megalodon = "^0.13"
oauth1-request = "^0.6"
regex = "^1.10"
reqwest = { version = "^0.11", features = ["json", "stream", "multipart"] }
rusqlite = { version = "^0.30", features = ["chrono"] }
reqwest = { version = "^0.12", features = ["json", "stream", "multipart"] }
rusqlite = { version = "^0.33", features = ["chrono"] }
serde = { version = "^1.0", features = ["derive"] }
tokio = { version = "^1.33", features = ["rt-multi-thread", "macros", "time"] }
tokio = { version = "^1.33", features = ["rt-multi-thread", "macros"] }
toml = "^0.8"
bsky-sdk = "^0.1"
atrium-api = "^0.24"
atrium-api = { version = "^0.25", features = ["namespace-appbsky"] }
image = "^0.25"
webp = "^0.3"
megalodon = "^1.1"

[profile.release]
strip = true
48 README.md

@@ -1,24 +1,30 @@
# oolatoocs, a Mastodon to Twitter bot
# oolatoocs, a Mastodon to Bluesky bot
## A little bit of history

So what is it? Originally, I wrote, with some help, [Scootaloo](https://framagit.org/veretcle/scootaloo/) which was a Twitter to Mastodon Bot to help the [writers at NintendojoFR](https://www.nintendojo.fr) not to worry about Mastodon: the vast majority of writers were posting to Twitter, the bot scooped everything and arranged it properly for Mastodon and everything was fine and dandy. It was also used, in an altered, beefed-up version, for [Nupes.social](https://nupes.social) to make the tweets from the NUPES political alliance on Twitter more easily accessible in Mastodon.
So what is it? Originally, I wrote, with some help, [Scootaloo](https://framagit.org/veretcle/scootaloo/) which was a Twitter to Mastodon Bot to help the [writers at NintendojoFR](https://www.nintendojo.fr) not to worry about Mastodon: the vast majority of writers were posting to Twitter, the bot scooped everything and arranged it properly for Mastodon and everything was fine and dandy. It was also used, in an altered, beefed-up version, for the (now defunct) Mastodon instance [Nupes.social](https://nupes.social) to make the tweets from the NUPES political alliance on Twitter more easily accessible for Mastodon users.

But then Elon came, and we couldn’t read data from Twitter anymore. So we had to rely on copy/pasting things from one to another, which is neither fun nor efficient.

Hence `oolatoocs`, which takes a Mastodon Timeline and reposts it to Twitter as properly as possible. And since Bluesky seems to be hype right now, it also incorporates Bluesky support since v3.0.0.
## And now…

Bluesky support is mandatory from now on: you can’t have Twitter or Bluesky alone, you must have both. I might change this behaviour in the near future, especially when I will inevitably have to drop support for Twitter. If you just want Twitter support, stick with the v2.4.x release; it’ll get the job done exactly as the newer version does for now.
Hence `oolatoocs`, which takes a Mastodon Timeline and reposts it to Bluesky as properly as possible.

If you don’t want Twitter support, open an issue and I will get motivated to comply (maybe…).
Since 2025-01-20, Twitter is no longer supported.

# Remarkable features

What it can do:
* Reproduces the Toot content into the Tweet/Record;
* Cuts (poorly) the Toot in half if it’s too long for Twitter/Bluesky and threads it (this is cut using a word count, not the best method, but it gets the job done);
* Reuploads images/gifs/videos from Mastodon to Twitter/Bluesky
* Can reproduce threads from Mastodon to Twitter/Bluesky
* Can reproduce polls from Mastodon to Twitter/Bluesky
* Can prevent a Toot from being tweeted/recorded to Bluesky by using the #NoTweet (case-insensitive) hashtag in Mastodon
* Reproduces the Toot content into the Record;
* Cuts (poorly) the Toot in half if it’s too long for Bluesky and threads it (this is cut using a word count, not the best method, but it gets the job done);
* Reuploads images/gifs/videos/webcards from Mastodon to Bluesky
* ⚠️ Bluesky does not support mixing images and videos. You can have up to 4 images on a Bsky record **or** 1 video, but not a mix. If you do so, only the video will be posted on Bluesky.
* ⚠️ Bluesky does not support images greater than 1 MB (that is 1,000,000 bytes or 976.6 KiB), so Oolatoocs converts the image to WebP and progressively reduces the quality to fit that limitation.
* ⚠️ Bluesky does not support webcards alongside any other media/quote, so webcards have the lowest priority
* Can reproduce threads from Mastodon to Bluesky
* Can reproduce (self-)quotes from Mastodon to Bluesky
* ⚠️ Bluesky can’t do quotes with webcards; you can only embed images **or** a video with quotes
* ⚠️ Bluesky does not support polls for now, so the poll itself is just presented as text from Mastodon instead, which is not the most elegant.
* Can prevent a Toot from being recorded to Bluesky by using the #NoTweet (case-insensitive) hashtag in Mastodon
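The WebP conversion mentioned in the feature list above works by re-encoding the image and stepping the quality down until the result fits under Bluesky’s blob limit. A minimal sketch of that loop, condensed from the `upload_media` change in `src/bsky.rs` further down this diff (the helper name and signature here are made up for illustration):

```rust
use image::ImageReader;
use std::{error::Error, io::Cursor};
use webp::{Encoder, WebPMemory};

// Hypothetical helper; the real logic lives inline in `upload_media` (src/bsky.rs below).
fn shrink_image_to_bsky_limit(raw: &[u8]) -> Result<Vec<u8>, Box<dyn Error + Send + Sync>> {
    let mut quality = 95f32; // start close to lossless
    let img = ImageReader::new(Cursor::new(raw))
        .with_guessed_format()?
        .decode()?;
    let encoder = Encoder::from_image(&img)?;
    let mut webp: WebPMemory = encoder.encode(quality);
    // re-encode 5 points lower until the blob fits Bluesky’s 1,000,000-byte limit
    while webp.len() > 1_000_000 {
        quality -= 5.0;
        webp = encoder.encode(quality);
    }
    Ok(webp.to_vec())
}
```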
# Configuration file

@@ -26,7 +32,8 @@ The configuration is relatively easy to follow:

```toml
[oolatoocs]
db_path = "/var/lib/oolatoocs/db.sqlite3" # the path to the DB where toot/tweet are stored
db_path = "/var/lib/oolatoocs/db.sqlite3" # the path to the DB where toots/tweets/records are stored
remove_hashtags = false # optional, defaults to false

[mastodon] # This part can be generated, see below
base = "https://m.nintendojo.fr"
@@ -35,15 +42,10 @@ client_secret = "<REDACTED>"
redirect = "urn:ietf:wg:oauth:2.0:oob"
token = "<REDACTED>"

[twitter] # you’ll have to get this part from Twitter, this can be done via https://developer.twitter.com/en
consumer_key = "<REDACTED>"
consumer_secret = "<REDACTED>"
oauth_token = "<REDACTED>"
oauth_token_secret = "<REDACTED>"

[bluesky]
[bluesky] # this is your Bsky handle and password + a writable path for the session handling
handle = "nintendojofr.bsky.social"
password = "<REDACTED>"
config_path = "/var/lib/oolatoocs/bsky.json"
```

## How to generate the Mastodon keys?
@@ -56,15 +58,9 @@ oolatoocs register --host https://<your-instance>

And follow the instructions.

## How to generate the Twitter part?

You’ll need to generate a key. This is a real pain in the ass, but you can use [this script](https://github.com/twitterdev/Twitter-API-v2-sample-code/blob/main/Manage-Tweets/create_tweet.py), modify it and run it to recover your key.

Will I some day make a subcommand to generate it? Maybe…

## How to generate the Bluesky part?

You’ll need your handle and password. I strongly recommend a dedicated application password.
You’ll need your handle and password. I strongly recommend a dedicated application password. You’ll also need a writable path to store the Bsky session.

# How to run
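The `config_path` entry above is where the Bluesky session is persisted between runs. As a rough sketch of how the new `get_session` in `src/bsky.rs` (shown in the diff just below) reuses it, written as a standalone function against the same `bsky-sdk` calls; the function name and flattened parameters here are illustrative only, the real code takes a `&BlueskyConfig`:

```rust
use bsky_sdk::{
    agent::config::{Config, FileStore},
    BskyAgent,
};
use std::{error::Error, fs::exists};

async fn bsky_agent(handle: &str, password: &str, path: &str) -> Result<BskyAgent, Box<dyn Error>> {
    if exists(path)? {
        // try to restore the previously saved session first
        let agent = BskyAgent::builder()
            .config(Config::load(&FileStore::new(path)).await?)
            .build()
            .await?;
        if agent.api.com.atproto.server.get_session().await.is_ok() {
            // stored tokens are still valid (the real get_session also re-saves them here)
            return Ok(agent);
        }
    }
    // otherwise log in from scratch and persist the fresh session for next time
    let agent = BskyAgent::builder().build().await?;
    agent.login(handle, password).await?;
    agent.to_config().await.save(&FileStore::new(path)).await?;
    Ok(agent)
}
```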
326 src/bsky.rs

@@ -1,13 +1,23 @@
use crate::config::BlueskyConfig;
use crate::{config::BlueskyConfig, utils::convert_aspect_ratio, OolatoocsError};
use atrium_api::{
app::bsky::feed::post::RecordData, com::atproto::repo::upload_blob::Output,
types::string::Datetime, types::string::Language,
types::string::Datetime, types::string::Language, types::string::RecordKey,
};
use bsky_sdk::{
agent::config::{Config, FileStore},
rich_text::RichText,
BskyAgent,
};
use futures::{stream, StreamExt};
use image::ImageReader;
use log::{debug, error, warn};
use megalodon::entities::{
attachment::{Attachment, AttachmentType},
card::Card,
};
use bsky_sdk::{rich_text::RichText, BskyAgent};
use log::error;
use megalodon::entities::attachment::{Attachment, AttachmentType};
use regex::Regex;
use std::error::Error;
use std::{error::Error, fs::exists, io::Cursor};
use webp::*;

/// Intermediary struct to deal with replies more easily
#[derive(Debug)]
@@ -16,6 +26,34 @@ pub struct BskyReply {
pub root_record_uri: String,
}

pub async fn get_session(config: &BlueskyConfig) -> Result<BskyAgent, Box<dyn Error>> {
if exists(&config.config_path)? {
let bluesky = BskyAgent::builder()
.config(Config::load(&FileStore::new(&config.config_path)).await?)
.build()
.await?;

if bluesky.api.com.atproto.server.get_session().await.is_ok() {
bluesky
.to_config()
.await
.save(&FileStore::new(&config.config_path))
.await?;
return Ok(bluesky);
}
}

let bluesky = BskyAgent::builder().build().await?;
bluesky.login(&config.handle, &config.password).await?;
bluesky
.to_config()
.await
.save(&FileStore::new(&config.config_path))
.await?;

Ok(bluesky)
}

pub async fn build_post_record(
config: &BlueskyConfig,
text: &str,
@@ -27,10 +65,10 @@ pub async fn build_post_record(

let insert_chars = "…";

let re = Regex::new(r#"(https?://)(\S{1,26})(\S*)"#).unwrap();
let re = Regex::new(r#"(https?://)(www\.)?(\S{1,26})(\S*)"#).unwrap();

while let Some(found) = re.captures(&rt.text.clone()) {
if let Some(group) = found.get(3) {
if let Some(group) = found.get(4) {
if !group.is_empty() {
rt.insert(group.start(), insert_chars);
rt.delete(
@@ -40,7 +78,8 @@ pub async fn build_post_record(
}
}
if let Some(group) = found.get(1) {
rt.delete(group.start(), group.start() + group.len());
let www: usize = found.get(2).map_or(0, |x| x.len());
rt.delete(group.start(), group.start() + www + group.len());
}
}

@@ -100,7 +139,7 @@ async fn get_record(
cid: None,
collection: atrium_api::types::string::Nsid::new("app.bsky.feed.post".to_string())?,
repo: atrium_api::types::string::Handle::new(config.to_string())?.into(),
rkey: rkey.to_string(),
rkey: RecordKey::new(rkey.to_string())?,
}
.into(),
)
@@ -109,73 +148,245 @@ async fn get_record(
Ok(record)
}

// it’s ugly af but it gets the job done for now
pub async fn generate_media_records(
/// Generate a Union of embed records to be built-in into records
/// In case an embed cannot be uploaded/created, this calling function silently gets Option instead
/// of failing
pub async fn generate_embed_records(
config: &BlueskyConfig,
bsky: &BskyAgent,
qid: Option<&str>,
media_attach: &[Attachment],
) -> Option<atrium_api::types::Union<atrium_api::app::bsky::feed::post::RecordEmbedRefs>> {
let mut embed: Option<
atrium_api::types::Union<atrium_api::app::bsky::feed::post::RecordEmbedRefs>,
> = None;
let mut images = Vec::new();
let mut videos: Vec<atrium_api::app::bsky::embed::video::MainData> = Vec::new();
card: &Option<Card>,
) -> Result<
Option<atrium_api::types::Union<atrium_api::app::bsky::feed::post::RecordEmbedRefs>>,
Box<dyn Error + Send + Sync>,
> {
// handle quote if any
let quote_embed = match qid {
Some(q) => generate_quote_records(config, q).await.ok(),
_ => None,
};

for media in media_attach.iter() {
let blob = upload_media(bsky, &media.url).await.unwrap();
// handle medias if any
let media_embed = if media_attach.len() > usize::from(0u8) {
let image_media_attach: Vec<_> = media_attach
.iter()
.filter(|x| x.r#type == AttachmentType::Image)
.cloned()
.collect();
let video_media_attach: Vec<_> = media_attach
.iter()
.filter(|x| x.r#type == AttachmentType::Video || x.r#type == AttachmentType::Gifv)
.cloned()
.collect();

match media.r#type {
AttachmentType::Image => {
images.push(
if !video_media_attach.is_empty() {
generate_video_record(bsky, video_media_attach).await.ok()
} else if !image_media_attach.is_empty() {
generate_images_records(bsky, image_media_attach).await.ok()
} else {
return Err(OolatoocsError::new("A media attached is not an image nor a video").into());
}
} else {
None
};

// handle webcard if any
let webcard_embed = match card {
Some(t) => generate_webcard_records(bsky, t).await.ok(),
None => None,
};

if let Some(q) = quote_embed {
if let Some(m) = media_embed {
let medias_mapped = match m {
atrium_api::app::bsky::feed::post::RecordEmbedRefs::AppBskyEmbedImagesMain(a) => atrium_api::app::bsky::embed::record_with_media::MainMediaRefs::AppBskyEmbedImagesMain(a),
atrium_api::app::bsky::feed::post::RecordEmbedRefs::AppBskyEmbedVideoMain(a) => atrium_api::app::bsky::embed::record_with_media::MainMediaRefs::AppBskyEmbedVideoMain(a),
_ => return Err(OolatoocsError::new("Something went terribly wrong when trying to add image/video to quote record: can’t decapsulate media").into()),
};
let quote_mapped = match q {
atrium_api::app::bsky::feed::post::RecordEmbedRefs::AppBskyEmbedRecordMain(
a,
) => a,
_ => return Err(OolatoocsError::new("Something went terribly wrong when trying to add image/video to quote record: can’t decapsulate quote").into()),
};
Some(atrium_api::types::Union::Refs(
atrium_api::app::bsky::feed::post::RecordEmbedRefs::AppBskyEmbedRecordWithMediaMain(
Box::new(
atrium_api::app::bsky::embed::record_with_media::MainData {
media: atrium_api::types::Union::Refs(medias_mapped),
record: (*quote_mapped),
}
.into(),
),
),
))
} else {
Some(atrium_api::types::Union::Refs(q))
}
} else if media_embed.is_some() {
media_embed.map(atrium_api::types::Union::Refs)
} else if webcard_embed.is_some() {
webcard_embed.map(atrium_api::types::Union::Refs)
} else {
None
};

Ok(None)
}

/// Generate an quote embed record
/// it is encapsulated in Option to prevent this function from failing
async fn generate_quote_records(
config: &BlueskyConfig,
quote_id: &str,
) -> Result<atrium_api::app::bsky::feed::post::RecordEmbedRefs, Box<dyn Error>> {
// if we can’t match the quote_id, simply return None
let quote_record = get_record(&config.handle, &rkey(quote_id)).await?;

Ok(
atrium_api::app::bsky::feed::post::RecordEmbedRefs::AppBskyEmbedRecordMain(Box::new(
atrium_api::app::bsky::embed::record::MainData {
record: atrium_api::com::atproto::repo::strong_ref::MainData {
cid: quote_record.data.cid.unwrap(),
uri: quote_record.data.uri.to_owned(),
}
.into(),
}
.into(),
)),
)
}

/// Generate an embed webcard record into Bsky
/// If the preview image does not exist or fails to upload, it is simply ignored
async fn generate_webcard_records(
bsky: &BskyAgent,
card: &Card,
) -> Result<atrium_api::app::bsky::feed::post::RecordEmbedRefs, Box<dyn Error + Send + Sync>> {
let blob = match &card.image {
Some(url) => upload_media(true, bsky, url).await?.blob.clone().into(),
None => None,
};

let record_card = atrium_api::app::bsky::embed::external::ExternalData {
description: card.description.clone(),
thumb: blob,
title: card.title.clone(),
uri: card.url.clone(),
};

Ok(
atrium_api::app::bsky::feed::post::RecordEmbedRefs::AppBskyEmbedExternalMain(Box::new(
atrium_api::app::bsky::embed::external::MainData {
external: record_card.into(),
}
.into(),
)),
)
}

/// Generate an array of Bsky image media records
async fn generate_images_records(
bsky: &BskyAgent,
media_attach: Vec<Attachment>,
) -> Result<atrium_api::app::bsky::feed::post::RecordEmbedRefs, Box<dyn Error + Send + Sync>> {
let mut stream = stream::iter(media_attach)
.map(|media| {
let bsky = bsky.clone();
tokio::task::spawn(async move {
debug!("Treating media {}", &media.url);
upload_media(true, &bsky, &media.url).await.map(|i| {
atrium_api::app::bsky::embed::images::ImageData {
alt: media
.description
.clone()
.map_or("".to_string(), |v| v.to_owned()),
aspect_ratio: None,
image: blob.data.blob,
aspect_ratio: convert_aspect_ratio(
&media.meta.as_ref().and_then(|m| m.original.clone()),
),
image: i.data.blob,
}
.into(),
);
}
AttachmentType::Gifv | AttachmentType::Video => {
videos.push(atrium_api::app::bsky::embed::video::MainData {
alt: media.description.clone(),
aspect_ratio: None,
captions: None,
video: blob.data.blob,
});
}
_ => {
error!("Not an image, not a video, what happened here?");
}
})
})
})
.buffered(4);

let mut images = Vec::new();

while let Some(result) = stream.next().await {
match result {
Ok(Ok(v)) => images.push(v.into()),
Ok(Err(e)) => warn!("Cannot treat a specific media: {}", e),
Err(e) => error!("Something went wrong when joining main thread: {}", e),
}
}

if !images.is_empty() {
embed = Some(atrium_api::types::Union::Refs(
return Ok(
atrium_api::app::bsky::feed::post::RecordEmbedRefs::AppBskyEmbedImagesMain(Box::new(
atrium_api::app::bsky::embed::images::MainData { images }.into(),
)),
));
);
}

// if a video has been uploaded, it takes priority as you can only have 1 video per post
if !videos.is_empty() {
embed = Some(atrium_api::types::Union::Refs(
atrium_api::app::bsky::feed::post::RecordEmbedRefs::AppBskyEmbedVideoMain(Box::new(
videos[0].clone().into(),
)),
))
}

embed
Err(OolatoocsError::new("Cannot embed media").into())
}

async fn upload_media(bsky: &BskyAgent, u: &str) -> Result<Output, Box<dyn Error>> {
let dl = reqwest::get(u).await?;
let bytes = dl.bytes().await?;
/// Generate a video Bsky media record
async fn generate_video_record(
bsky: &BskyAgent,
media_attach: Vec<Attachment>,
) -> Result<atrium_api::app::bsky::feed::post::RecordEmbedRefs, Box<dyn Error + Send + Sync>> {
// treat only the very first video, ignore the rest
let media = &media_attach[0];
let blob = upload_media(false, bsky, &media.url).await?;

let record = bsky.api.com.atproto.repo.upload_blob(bytes.into()).await?;
Ok(
atrium_api::app::bsky::feed::post::RecordEmbedRefs::AppBskyEmbedVideoMain(Box::new(
atrium_api::app::bsky::embed::video::MainData {
alt: media.description.clone(),
aspect_ratio: convert_aspect_ratio(
&media.meta.as_ref().and_then(|m| m.original.clone()),
),
captions: None,
video: blob.data.blob,
}
.into(),
)),
)
}

async fn upload_media(
is_image: bool,
bsky: &BskyAgent,
u: &str,
) -> Result<Output, Box<dyn Error + Send + Sync>> {
let dl = reqwest::get(u).await?;
let content_length = dl.content_length().ok_or("Content length unavailable")?;
let bytes = if content_length <= 1_000_000 || !is_image {
dl.bytes().await?.as_ref().to_vec()
} else {
// this is an image and it’s over 1Mb long
debug!("Img file too large: {}", content_length);
// defaults to 95% quality for WebP compression
let mut default_quality = 95f32;
let img = ImageReader::new(Cursor::new(dl.bytes().await?))
.with_guessed_format()?
.decode()?;
let encoder: Encoder = Encoder::from_image(&img)?;
let mut webp: WebPMemory = encoder.encode(default_quality);

while webp.len() > 1_000_000 {
debug!("Img file too large at {}%, reducing…", default_quality);
default_quality -= 5.0;
webp = encoder.encode(default_quality);
}

webp.to_vec()
};

let record = bsky.api.com.atproto.repo.upload_blob(bytes).await?;

Ok(record)
}

@@ -190,12 +401,13 @@ mod tests {

#[tokio::test]
async fn test_build_post_record() {
let text = "@factornews@piaille.fr Retrouvez-nous ici https://www.nintendojo.fr/articles/editos/le-mod-renovation-de-8bitdo-pour-manette-n64 et là https://www.nintendojo.fr/articles/analyses/vite-vu/vite-vu-morbid-the-lords-of-ire et un lien très court http://vsl.ie/TaMere";
let expected_text = "@factornews@piaille.fr Retrouvez-nous ici www.nintendojo.fr/articles… et là www.nintendojo.fr/articles… et un lien très court vsl.ie/TaMere";
let text = "@factornews@piaille.fr Retrouvez-nous ici https://www.nintendojo.fr/articles/editos/le-mod-renovation-de-8bitdo-pour-manette-n64 et là https://www.nintendojo.fr/articles/analyses/vite-vu/vite-vu-morbid-the-lords-of-ire et un lien très court http://vsl.ie/TaMere et un autre https://p.nintendojo.fr/w/kV3CBbKKt1nPEChHhZiNve + http://www.xxx.com + https://www.youtube.com/watch?v=dQw4w9WgXcQ&pp=ygUJcmljayByb2xs";
let expected_text = "@factornews@piaille.fr Retrouvez-nous ici nintendojo.fr/articles/edi… et là nintendojo.fr/articles/ana… et un lien très court vsl.ie/TaMere et un autre p.nintendojo.fr/w/kV3CBbKK… + xxx.com + youtube.com/watch?v=dQw4w9…";

let bsky_conf = BlueskyConfig {
handle: "tamerelol.bsky.social".to_string(),
password: "dtc".to_string(),
config_path: "nope".to_string(),
};

let created_record_data = build_post_record(&bsky_conf, text, &None, None, &None)
src/config.rs

@@ -5,21 +5,23 @@ use std::fs::read_to_string;
pub struct Config {
pub oolatoocs: OolatoocsConfig,
pub mastodon: MastodonConfig,
pub twitter: TwitterConfig,
pub bluesky: BlueskyConfig,
}

#[derive(Debug, Deserialize, Clone)]
pub struct TwitterConfig {
pub consumer_key: String,
pub consumer_secret: String,
pub oauth_token: String,
pub oauth_token_secret: String,
}

#[derive(Debug, Deserialize)]
pub struct OolatoocsConfig {
pub db_path: String,
#[serde(default)]
pub remove_hashtags: bool,
}

impl Default for OolatoocsConfig {
fn default() -> Self {
OolatoocsConfig {
db_path: "/var/lib/oolatoocs/db".to_string(),
remove_hashtags: false,
}
}
}

#[derive(Debug, Deserialize)]
@@ -35,6 +37,7 @@ pub struct MastodonConfig {
pub struct BlueskyConfig {
pub handle: String,
pub password: String,
pub config_path: String,
}

/// parses TOML file into Config struct
135 src/lib.rs

@@ -1,4 +1,3 @@
use bsky_sdk::BskyAgent;
use log::debug;

mod error;
@@ -8,7 +7,7 @@ mod config;
pub use config::{parse_toml, Config};

mod state;
use state::{delete_state, read_all_state, read_state, write_state, TootTweetRecord};
use state::{delete_state, read_all_state, read_state, write_state, TootRecord};
pub use state::{init_db, migrate_db};

mod mastodon;
@@ -18,11 +17,8 @@ use mastodon::{get_mastodon_instance, get_mastodon_timeline_since, get_status_ed
mod utils;
use utils::{generate_multi_tweets, strip_everything};

mod twitter;
use twitter::{delete_tweet, generate_media_ids, post_tweet, transform_poll};

mod bsky;
use bsky::{build_post_record, generate_media_records, BskyReply};
use bsky::{build_post_record, generate_embed_records, get_session, BskyReply};

use rusqlite::Connection;

@@ -31,12 +27,12 @@ pub async fn run(config: &Config) {
let conn = Connection::open(&config.oolatoocs.db_path)
.unwrap_or_else(|e| panic!("Cannot open DB: {}", e));

let mastodon = get_mastodon_instance(&config.mastodon);
let mastodon = get_mastodon_instance(&config.mastodon)
.unwrap_or_else(|e| panic!("Cannot instantiate Mastodon: {}", e));

let bluesky = BskyAgent::builder()
.build()
let bluesky = get_session(&config.bluesky)
.await
.unwrap_or_else(|e| panic!("Can’t build Bsky Agent: {}", e));
.unwrap_or_else(|e| panic!("Cannot get Bsky session: {}", e));

let last_entry =
read_state(&conn, None).unwrap_or_else(|e| panic!("Cannot get last toot id: {}", e));
@@ -50,27 +46,13 @@ pub async fn run(config: &Config) {
// a date has been found
if d > t.datetime.unwrap() {
debug!("Last toot date is posterior to the previously written tweet, deleting…");
let (local_tweet_ids, local_record_uris) = read_all_state(&conn, t.toot_id)
.unwrap_or_else(|e| {
let local_record_uris =
read_all_state(&conn, t.toot_id).unwrap_or_else(|e| {
panic!(
"Cannot fetch all tweets associated with Toot ID {}: {}",
"Cannot fetch all records associated with Toot ID {}: {}",
t.toot_id, e
)
});
for local_tweet_id in local_tweet_ids.into_iter() {
delete_tweet(&config.twitter, local_tweet_id)
.await
.unwrap_or_else(|e| {
panic!("Cannot delete Tweet ID ({}): {}", t.tweet_id, e)
});
}

debug!("Create Bsky session prior to deletion");
bluesky
.login(&config.bluesky.handle, &config.bluesky.password)
.await
.unwrap_or_else(|e| panic!("Cannot login to Bsky: {}", e));

for local_record_uri in local_record_uris.into_iter() {
bluesky
.delete_record(&local_record_uri)
@@ -104,51 +86,32 @@ pub async fn run(config: &Config) {
}

// form tweet_content and strip everything useless in it
let Ok(mut tweet_content) = strip_everything(&toot.content, &toot.tags) else {
let toot_tags: Vec<megalodon::entities::status::Tag> =
match &config.oolatoocs.remove_hashtags {
true => toot.tags.clone(),
false => vec![],
};
let Ok(mut tweet_content) =
strip_everything(&toot.content, &toot_tags, &config.mastodon.base)
else {
continue; // skip in case we can’t strip something
};

debug!("Now we need a valid Bsky session, creating it");
if bluesky.api.com.atproto.server.get_session().await.is_err() {
bluesky
.login(&config.bluesky.handle, &config.bluesky.password)
.await
.unwrap_or_else(|e| panic!("Cannot login to Bsky: {}", e));
}

// threads if necessary
let (mut tweet_reply_to, mut record_reply_to) = toot
.in_reply_to_id
.and_then(|t| {
read_state(&conn, Some(t.parse::<u64>().unwrap()))
.ok()
.flatten()
.map(|s| {
(
s.tweet_id,
BskyReply {
record_uri: s.record_uri.to_owned(),
root_record_uri: s.root_record_uri.to_owned(),
},
)
})
})
.unzip();
let mut record_reply_to = toot.in_reply_to_id.and_then(|t| {
read_state(&conn, Some(t.parse::<u64>().unwrap()))
.ok()
.flatten()
.map(|s| BskyReply {
record_uri: s.record_uri.to_owned(),
root_record_uri: s.root_record_uri.to_owned(),
})
});

// if the toot is too long, we cut it in half here
if let Some((first_half, second_half)) = generate_multi_tweets(&tweet_content) {
tweet_content = second_half;
// post the first half
let tweet_reply_id =
post_tweet(&config.twitter, &first_half, vec![], tweet_reply_to, None)
.await
.unwrap_or_else(|e| {
panic!(
"Cannot post the first half of {} for Twitter: {}",
&toot.id, e
)
});

let record = build_post_record(
&config.bluesky,
&first_half,
@@ -167,9 +130,8 @@ pub async fn run(config: &Config) {
// write it to db
write_state(
&conn,
TootTweetRecord {
TootRecord {
toot_id: toot.id.parse::<u64>().unwrap(),
tweet_id: tweet_reply_id,
record_uri: record_reply_id.data.uri.to_owned(),
root_record_uri: record_reply_to
.as_ref()
@@ -181,8 +143,8 @@ pub async fn run(config: &Config) {
)
.unwrap_or_else(|e| {
panic!(
"Cannot store Toot/Tweet/Record ({}/{}/{}): {}",
&toot.id, tweet_reply_id, &record_reply_id.data.uri, e
"Cannot store Toot/Tweet/Record ({}/{}): {}",
&toot.id, &record_reply_id.data.uri, e
)
});

@@ -194,33 +156,33 @@ pub async fn run(config: &Config) {
v.root_record_uri.clone()
}),
});

tweet_reply_to = Some(tweet_reply_id);
};

// treats poll if any
let in_poll = toot.poll.map(|p| transform_poll(&p));
// get quote_id if any
let quote_id = match toot.reblog {
Some(r) => match read_state(&conn, Some(r.id.parse::<u64>().unwrap())) {
Ok(q) => q.map(|x| x.record_uri.to_owned()),
_ => None,
},
None => None,
};

// treats medias
let record_medias = generate_media_records(&bluesky, &toot.media_attachments).await;
let tweet_medias = generate_media_ids(&config.twitter, &toot.media_attachments).await;

// posts corresponding tweet
let tweet_id = post_tweet(
&config.twitter,
&tweet_content,
tweet_medias,
tweet_reply_to,
in_poll,
let record_embed = generate_embed_records(
&config.bluesky,
&bluesky,
quote_id.as_deref(),
&toot.media_attachments,
&toot.card,
)
.await
.unwrap_or_else(|e| panic!("Cannot Tweet {}: {}", toot.id, e));
.unwrap_or_else(|e| panic!("Cannot embed record for {}: {}", &toot.id, e));

// posts corresponding tweet
let record = build_post_record(
&config.bluesky,
&tweet_content,
&toot.language,
record_medias,
record_embed,
&record_reply_to,
)
.await
@@ -234,9 +196,8 @@ pub async fn run(config: &Config) {
// writes the current state of the tweet
write_state(
&conn,
TootTweetRecord {
TootRecord {
toot_id: toot.id.parse::<u64>().unwrap(),
tweet_id,
record_uri: created_record.data.uri.clone(),
root_record_uri: record_reply_to
.as_ref()
@@ -246,6 +207,6 @@ pub async fn run(config: &Config) {
datetime: None,
},
)
.unwrap_or_else(|e| panic!("Cannot store Toot/Tweet ({}/{}): {}", &toot.id, tweet_id, e));
.unwrap_or_else(|e| panic!("Cannot store Toot/Tweet ({}): {}", &toot.id, e));
}
}
src/mastodon.rs

@@ -1,7 +1,7 @@
use crate::config::MastodonConfig;
use chrono::{DateTime, Utc};
use megalodon::{
entities::{Status, StatusVisibility},
entities::{QuotedStatus, Status, StatusVisibility},
generator,
mastodon::mastodon::Mastodon,
megalodon::AppInputOptions,
@@ -12,12 +12,12 @@ use std::error::Error;
use std::io::stdin;

/// Get Mastodon Object instance
pub fn get_mastodon_instance(config: &MastodonConfig) -> Mastodon {
Mastodon::new(
pub fn get_mastodon_instance(config: &MastodonConfig) -> Result<Mastodon, Box<dyn Error>> {
Ok(Mastodon::new(
config.base.to_string(),
Some(config.token.to_string()),
None,
)
)?)
}

/// Get the edited_at field from the specified toot
@@ -55,9 +55,19 @@ pub async fn get_mastodon_timeline_since(
.clone()
.is_some_and(|r| r == t.account.id)
})
.filter(|t| t.visibility == StatusVisibility::Public) // excludes everything that isn’t
// public
.filter(|t| t.reblog.is_none()) // excludes reblogs
.filter(|t| t.visibility == StatusVisibility::Public) // excludes everything that isn’t public
.filter(|t| t.reblog.is_none()) // exclude reblogs
.filter(|t| {
// exclude quotes that aren’t ours
t.quote.is_none()
|| t.quote.clone().is_some_and(|r| match r {
QuotedStatus::Quote(q) => q
.quoted_status
.clone()
.is_some_and(|iq| iq.account.id == t.account.id),
_ => false,
})
})
.cloned()
.collect();

@@ -71,7 +81,8 @@ pub async fn get_mastodon_timeline_since(
/// Most of this function is a direct copy/paste of the official `elefren` crate
#[tokio::main]
pub async fn register(host: &str) {
let mastodon = generator(megalodon::SNS::Mastodon, host.to_string(), None, None);
let mastodon = generator(megalodon::SNS::Mastodon, host.to_string(), None, None)
.expect("Cannot build Mastodon generator object");

let options = AppInputOptions {
redirect_uris: None,
140 src/state.rs

@@ -5,11 +5,9 @@ use std::error::Error;

/// Struct for each query line
#[derive(Debug)]
pub struct TootTweetRecord {
pub struct TootRecord {
// Mastodon part
pub toot_id: u64,
// Twitter part
pub tweet_id: u64,
// Bluesky part
pub record_uri: String,
pub root_record_uri: String,
@@ -20,44 +18,36 @@ pub struct TootTweetRecord {
pub fn delete_state(conn: &Connection, toot_id: u64) -> Result<(), Box<dyn Error>> {
debug!("Deleting Toot ID {}", toot_id);
conn.execute(
&format!("DELETE FROM toot_tweet_record WHERE toot_id = {}", toot_id),
&format!("DELETE FROM toot_record WHERE toot_id = {}", toot_id),
[],
)?;
Ok(())
}

/// Retrieves all tweets associated to a toot in the form of a vector
pub fn read_all_state(
conn: &Connection,
toot_id: u64,
) -> Result<(Vec<u64>, Vec<String>), Box<dyn Error>> {
pub fn read_all_state(conn: &Connection, toot_id: u64) -> Result<Vec<String>, Box<dyn Error>> {
let query = format!(
"SELECT tweet_id, record_uri FROM toot_tweet_record WHERE toot_id = {};",
"SELECT record_uri FROM toot_record WHERE toot_id = {};",
toot_id
);
let mut stmt = conn.prepare(&query)?;
let mut rows = stmt.query([])?;

let mut tweet_v: Vec<u64> = Vec::new();
let mut record_v: Vec<String> = Vec::new();
while let Some(row) = rows.next()? {
tweet_v.push(row.get(0)?);
record_v.push(row.get(1)?);
record_v.push(row.get(0)?);
}

Ok((tweet_v, record_v))
Ok(record_v)
}

/// if None is passed, read the last tweet from DB
/// if a tweet_id is passed, read this particular tweet from DB
pub fn read_state(
conn: &Connection,
s: Option<u64>,
) -> Result<Option<TootTweetRecord>, Box<dyn Error>> {
pub fn read_state(conn: &Connection, s: Option<u64>) -> Result<Option<TootRecord>, Box<dyn Error>> {
debug!("Reading toot_id {:?}", s);
let begin_query = "SELECT *, UNIXEPOCH(datetime) AS unix_datetime FROM toot_tweet_record";
let begin_query = "SELECT *, UNIXEPOCH(datetime) AS unix_datetime FROM toot_record";
let query: String = match s {
Some(i) => format!("{begin_query} WHERE toot_id = {i} ORDER BY tweet_id DESC LIMIT 1"),
Some(i) => format!("{begin_query} WHERE toot_id = {i} ORDER BY record_uri DESC LIMIT 1"),
None => format!("{begin_query} ORDER BY toot_id DESC LIMIT 1"),
};

@@ -65,9 +55,8 @@ pub fn read_state(

let t = stmt
.query_row([], |row| {
Ok(TootTweetRecord {
Ok(TootRecord {
toot_id: row.get("toot_id")?,
tweet_id: row.get("tweet_id")?,
record_uri: row.get("record_uri")?,
root_record_uri: row.get("root_record_uri")?,
datetime: Some(
@@ -81,11 +70,11 @@ pub fn read_state(
}

/// Writes last treated tweet id and toot id to the db
pub fn write_state(conn: &Connection, t: TootTweetRecord) -> Result<(), Box<dyn Error>> {
pub fn write_state(conn: &Connection, t: TootRecord) -> Result<(), Box<dyn Error>> {
debug!("Write struct {:?}", t);
conn.execute(
"INSERT INTO toot_tweet_record (toot_id, tweet_id, record_uri, root_record_uri) VALUES (?1, ?2, ?3, ?4)",
params![t.toot_id, t.tweet_id, t.record_uri, t.root_record_uri],
"INSERT INTO toot_record (toot_id, record_uri, root_record_uri) VALUES (?1, ?2, ?3)",
params![t.toot_id, t.record_uri, t.root_record_uri],
)?;

Ok(())
@@ -93,17 +82,13 @@ pub fn write_state(conn: &Connection, t: TootTweetRecord) -> Result<(), Box<dyn

/// Initiates the DB from path
pub fn init_db(d: &str) -> Result<(), Box<dyn Error>> {
debug!(
"{}",
format!("Initializing DB for {}", env!("CARGO_PKG_NAME"))
);
debug!("Initializing DB for {}", env!("CARGO_PKG_NAME"));
let conn = Connection::open(d)?;

conn.execute(
"CREATE TABLE IF NOT EXISTS toot_tweet_record (
"CREATE TABLE IF NOT EXISTS toot_record (
toot_id INTEGER,
tweet_id INTEGER PRIMARY KEY,
record_uri VARCHAR(128) DEFAULT '',
record_uri VARCHAR(128) PRIMARY KEY,
root_record_uri VARCHAR(128) DEFAULT '',
datetime INTEGER DEFAULT CURRENT_TIMESTAMP
)",
@@ -113,20 +98,20 @@ pub fn init_db(d: &str) -> Result<(), Box<dyn Error>> {
Ok(())
}

/// Migrate DB from 1.6+ to 3+
/// Migrate DB from 3+ to 4+
pub fn migrate_db(d: &str) -> Result<(), Box<dyn Error>> {
debug!("Migration DB for Oolatoocs");

let conn = Connection::open(d)?;

let res = conn.execute("SELECT datetime FROM toot_tweet_record;", []);
let res = conn.execute("SELECT datetime FROM toot_record;", []);

// If the column can be selected then, it’s OK
// if not, see if the error is a missing column and add it
match res {
Err(e) => match e.to_string().as_str() {
"no such table: toot_tweet_record" => migrate_db_alter_table(&conn), // table does not exist
"Execute returned results - did you mean to call query?" => Ok(()), // return results,
"no such table: toot_record" => migrate_db_alter_table(&conn), // table does not exist
"Execute returned results - did you mean to call query?" => Ok(()), // return results,
// column does
// exist
_ => Err(e.into()),
@@ -139,10 +124,9 @@ pub fn migrate_db(d: &str) -> Result<(), Box<dyn Error>> {
fn migrate_db_alter_table(c: &Connection) -> Result<(), Box<dyn Error>> {
// create the new table
c.execute(
"CREATE TABLE IF NOT EXISTS toot_tweet_record (
"CREATE TABLE IF NOT EXISTS toot_record (
toot_id INTEGER,
tweet_id INTEGER PRIMARY KEY,
record_uri VARCHAR(128) DEFAULT '',
record_uri VARCHAR(128) PRIMARY KEY,
root_record_uri VARCHAR(128) DEFAULT '',
datetime INTEGER DEFAULT CURRENT_TIMESTAMP
)",
@@ -151,13 +135,14 @@ fn migrate_db_alter_table(c: &Connection) -> Result<(), Box<dyn Error>> {

// copy data from the old table
c.execute(
"INSERT INTO toot_tweet_record (toot_id, tweet_id, datetime)
SELECT toot_id, tweet_id, datetime FROM tweet_to_toot;",
"INSERT INTO toot_record (toot_id, record_uri, root_record_uri, datetime)
SELECT toot_id, record_uri, root_record_uri, datetime FROM toot_tweet_record
WHERE record_uri != '';",
[],
)?;

// drop the old table
c.execute("DROP TABLE IF EXISTS tweet_to_toot;", [])?;
c.execute("DROP TABLE IF EXISTS toot_tweet_record;", [])?;

Ok(())
}
@@ -178,8 +163,7 @@ mod tests {

// open said file
let conn = Connection::open(d).unwrap();
conn.execute("SELECT * from toot_tweet_record;", [])
.unwrap();
conn.execute("SELECT * from toot_record;", []).unwrap();

remove_file(d).unwrap();
}
@@ -194,9 +178,9 @@ mod tests {
let conn = Connection::open(d).unwrap();

conn.execute(
"INSERT INTO toot_tweet_record (tweet_id, toot_id)
"INSERT INTO toot_record (record_uri, toot_id)
VALUES
(100, 1001);",
('a', 1001);",
[],
)
.unwrap();
@@ -214,9 +198,8 @@ mod tests {

let conn = Connection::open(d).unwrap();

let t_in = TootTweetRecord {
let t_in = TootRecord {
toot_id: 987654321,
tweet_id: 123456789,
record_uri: "a".to_string(),
root_record_uri: "c".to_string(),
datetime: None,
@@ -225,14 +208,13 @@ mod tests {
write_state(&conn, t_in).unwrap();

let mut stmt = conn
.prepare("SELECT *, UNIXEPOCH(datetime) AS unix_datetime FROM toot_tweet_record;")
.prepare("SELECT *, UNIXEPOCH(datetime) AS unix_datetime FROM toot_record;")
.unwrap();

let t_out = stmt
.query_row([], |row| {
Ok(TootTweetRecord {
Ok(TootRecord {
toot_id: row.get("toot_id").unwrap(),
tweet_id: row.get("tweet_id").unwrap(),
record_uri: row.get("record_uri").unwrap(),
root_record_uri: row.get("root_record_uri").unwrap(),
datetime: Some(
@@ -243,7 +225,6 @@ mod tests {
.unwrap();

assert_eq!(t_out.toot_id, 987654321);
assert_eq!(t_out.tweet_id, 123456789);
assert_eq!(t_out.record_uri, "a".to_string());
assert_eq!(t_out.root_record_uri, "c".to_string());

@@ -259,10 +240,10 @@ mod tests {
let conn = Connection::open(d).unwrap();

conn.execute(
"INSERT INTO toot_tweet_record (toot_id, tweet_id, record_uri)
"INSERT INTO toot_record (toot_id, record_uri)
VALUES
(101, 1001, 'abc'),
(102, 1002, 'def');",
(101, 'abc'),
(102, 'def');",
[],
)
.unwrap();
@@ -272,7 +253,6 @@ mod tests {
remove_file(d).unwrap();

assert_eq!(t_out.toot_id, 102);
assert_eq!(t_out.tweet_id, 1002);
assert_eq!(t_out.record_uri, "def".to_string());
}

@@ -300,9 +280,9 @@ mod tests {
let conn = Connection::open(d).unwrap();

conn.execute(
"INSERT INTO toot_tweet_record (toot_id, tweet_id, record_uri)
"INSERT INTO toot_record (toot_id, record_uri)
VALUES
(100, 1000, 'abc');",
(100, 'abc');",
[],
)
.unwrap();
@@ -323,9 +303,9 @@ mod tests {
let conn = Connection::open(d).unwrap();

conn.execute(
"INSERT INTO toot_tweet_record (toot_id, tweet_id, record_uri)
"INSERT INTO toot_record (toot_id, record_uri)
VALUES
(100, 1000, 'abc');",
(100, 'abc');",
[],
)
.unwrap();
@@ -335,7 +315,6 @@ mod tests {
remove_file(d).unwrap();

assert_eq!(t_out.toot_id, 100);
assert_eq!(t_out.tweet_id, 1000);
assert_eq!(t_out.record_uri, "abc".to_string());
}

@@ -348,10 +327,10 @@ mod tests {
let conn = Connection::open(d).unwrap();

conn.execute(
"INSERT INTO toot_tweet_record (toot_id, tweet_id, record_uri)
"INSERT INTO toot_record (toot_id, record_uri)
VALUES
(1000, 100, 'abc'),
(1000, 101, 'def');",
(1000, 'abc'),
(1000, 'def');",
[],
)
.unwrap();
@@ -361,7 +340,6 @@ mod tests {
remove_file(d).unwrap();

assert_eq!(t_out.toot_id, 1000);
assert_eq!(t_out.tweet_id, 101);
assert_eq!(t_out.record_uri, "def".to_string());
}

@@ -372,9 +350,11 @@ mod tests {

let conn = Connection::open(d).unwrap();
conn.execute(
"CREATE TABLE IF NOT EXISTS tweet_to_toot (
tweet_id INTEGER,
toot_id INTEGER PRIMARY KEY,
"CREATE TABLE IF NOT EXISTS toot_tweet_record (
toot_id INTEGER,
tweet_id INTEGER PRIMARY KEY,
record_uri VARCHAR(128) DEFAULT '',
root_record_uri VARCHAR(128) DEFAULT '',
datetime INTEGER DEFAULT CURRENT_TIMESTAMP
)",
[],
@@ -382,7 +362,7 @@ mod tests {
.unwrap();

conn.execute(
"INSERT INTO tweet_to_toot (tweet_id, toot_id) VALUES (0, 0), (1, 1);",
"INSERT INTO toot_tweet_record (tweet_id, toot_id, record_uri) VALUES (0, 0, ''), (1, 1, 'abc');",
[],
)
.unwrap();
@@ -391,7 +371,6 @@ mod tests {

let last_state = read_state(&conn, None).unwrap().unwrap();

assert_eq!(last_state.tweet_id, 1);
assert_eq!(last_state.toot_id, 1);

migrate_db(d).unwrap(); // shouldn’t do anything
@@ -408,7 +387,7 @@ mod tests {
let conn = Connection::open(d).unwrap();

conn.execute(
"INSERT INTO toot_tweet_record(toot_id, tweet_id, record_uri) VALUES (0, 0, 'abc');",
"INSERT INTO toot_record(toot_id, record_uri) VALUES (0, 'abc');",
[],
)
.unwrap();
@@ -416,13 +395,12 @@ mod tests {
delete_state(&conn, 0).unwrap();

let mut stmt = conn
.prepare("SELECT *, UNIXEPOCH(datetime) AS unix_datetime FROM toot_tweet_record;")
.prepare("SELECT *, UNIXEPOCH(datetime) AS unix_datetime FROM toot_record;")
.unwrap();

let t_out = stmt.query_row([], |row| {
Ok(TootTweetRecord {
Ok(TootRecord {
toot_id: row.get("toot_id").unwrap(),
tweet_id: row.get("tweet_id").unwrap(),
record_uri: row.get("record_uri").unwrap(),
root_record_uri: row.get("root_record_uri").unwrap(),
datetime: Some(
@@ -434,7 +412,7 @@ mod tests {
assert!(t_out.is_err_and(|x| x == rusqlite::Error::QueryReturnedNoRows));

conn.execute(
"INSERT INTO toot_tweet_record(toot_id, tweet_id, record_uri) VALUES(42, 102, 'abc'), (42, 103, 'def');",
"INSERT INTO toot_record(toot_id, record_uri) VALUES(42, 'abc'), (42, 'def');",
[],
)
.unwrap();
@@ -442,13 +420,12 @@ mod tests {
delete_state(&conn, 42).unwrap();

let mut stmt = conn
.prepare("SELECT *, UNIXEPOCH(datetime) AS unix_datetime FROM toot_tweet_record;")
.prepare("SELECT *, UNIXEPOCH(datetime) AS unix_datetime FROM toot_record;")
.unwrap();

let t_out = stmt.query_row([], |row| {
Ok(TootTweetRecord {
Ok(TootRecord {
toot_id: row.get("toot_id").unwrap(),
tweet_id: row.get("tweet_id").unwrap(),
record_uri: row.get("record_uri").unwrap(),
root_record_uri: row.get("root_record_uri").unwrap(),
datetime: Some(
@@ -471,16 +448,13 @@ mod tests {
let conn = Connection::open(d).unwrap();

conn.execute(
"INSERT INTO toot_tweet_record (toot_id, tweet_id, record_uri) VALUES (42, 102, 'abc'), (42, 103, 'def'), (43, 105, 'ghi');",
"INSERT INTO toot_record (toot_id, record_uri) VALUES (42, 'abc'), (42, 'def'), (43, 'ghi');",
[],
)
.unwrap();

let (tweet_v1, record_v1) = read_all_state(&conn, 43).unwrap();
let (tweet_v2, record_v2) = read_all_state(&conn, 42).unwrap();

assert_eq!(tweet_v1, vec![105]);
assert_eq!(tweet_v2, vec![102, 103]);
let record_v1 = read_all_state(&conn, 43).unwrap();
let record_v2 = read_all_state(&conn, 42).unwrap();

assert_eq!(record_v1, vec!["ghi".to_string()]);
assert_eq!(record_v2, vec!["abc".to_string(), "def".to_string()]);
556
src/twitter.rs
556
src/twitter.rs
@@ -1,556 +0,0 @@
|
||||
use crate::config::TwitterConfig;
|
||||
use crate::error::OolatoocsError;
|
||||
use chrono::Utc;
|
||||
use futures::{stream, StreamExt};
|
||||
use log::{debug, error, warn};
|
||||
use megalodon::entities::{
|
||||
attachment::{Attachment, AttachmentType},
|
||||
Poll,
|
||||
};
|
||||
use oauth1_request::Token;
|
||||
use reqwest::{
|
||||
multipart::{Form, Part},
|
||||
Body, Client,
|
||||
};
|
||||
use serde::{Deserialize, Serialize};
|
||||
use std::{error::Error, ops::Not};
|
||||
use tokio::time::{sleep, Duration};
|
||||
|
||||
const TWITTER_API_TWEET_URL: &str = "https://api.twitter.com/2/tweets";
|
||||
const TWITTER_UPLOAD_MEDIA_URL: &str = "https://upload.twitter.com/1.1/media/upload.json";
|
||||
const TWITTER_METADATA_MEDIA_URL: &str =
|
||||
"https://upload.twitter.com/1.1/media/metadata/create.json";
|
||||
|
||||
// I don’t know, don’t ask me
|
||||
#[derive(oauth1_request::Request)]
|
||||
struct EmptyRequest {}
|
||||
|
||||
#[derive(Serialize, Debug)]
|
||||
struct Tweet {
|
||||
text: String,
|
||||
#[serde(skip_serializing_if = "Option::is_none")]
|
||||
media: Option<TweetMediasIds>,
|
||||
#[serde(skip_serializing_if = "Option::is_none")]
|
||||
reply: Option<TweetReply>,
|
||||
#[serde(skip_serializing_if = "Option::is_none")]
|
||||
poll: Option<TweetPoll>,
|
||||
}
|
||||
|
||||
#[derive(Serialize, Debug)]
|
||||
struct TweetMediasIds {
|
||||
media_ids: Vec<String>,
|
||||
}
|
||||
|
||||
#[derive(Serialize, Debug)]
|
||||
struct TweetReply {
|
||||
in_reply_to_tweet_id: String,
|
||||
}
|
||||
|
||||
#[derive(Serialize, Debug)]
|
||||
pub struct TweetPoll {
|
||||
pub options: Vec<String>,
|
||||
pub duration_minutes: u16,
|
||||
}
|
||||
|
||||
#[derive(Deserialize, Debug)]
|
||||
struct TweetResponse {
|
||||
data: TweetResponseData,
|
||||
}
|
||||
|
||||
#[derive(Deserialize, Debug)]
|
||||
struct TweetResponseData {
|
||||
id: String,
|
||||
}
|
||||
|
||||
#[derive(Deserialize, Debug)]
|
||||
struct UploadMediaResponse {
|
||||
media_id: u64,
|
||||
processing_info: Option<UploadMediaResponseProcessingInfo>,
|
||||
}
|
||||
|
||||
#[derive(Deserialize, Debug)]
|
||||
struct UploadMediaResponseProcessingInfo {
|
||||
state: UploadMediaResponseProcessingInfoState,
|
||||
check_after_secs: Option<u64>,
|
||||
}
|
||||
|
||||
#[derive(Deserialize, Debug)]
|
||||
enum UploadMediaResponseProcessingInfoState {
|
||||
#[serde(rename = "failed")]
|
||||
Failed,
|
||||
#[serde(rename = "succeeded")]
|
||||
Succeeded,
|
||||
#[serde(rename = "pending")]
|
||||
Pending,
|
||||
#[serde(rename = "in_progress")]
|
||||
InProgress,
|
||||
}
|
||||
|
||||
#[derive(Serialize, Debug)]
|
||||
struct MediaMetadata {
|
||||
media_id: u64,
|
||||
alt_text: MediaMetadataAltText,
|
||||
}
|
||||
|
||||
#[derive(Serialize, Debug)]
|
||||
struct MediaMetadataAltText {
|
||||
text: String,
|
||||
}
|
||||
|
||||
#[derive(Serialize, Debug, oauth1_request::Request)]
|
||||
struct UploadMediaCommand {
|
||||
command: String,
|
||||
media_id: String,
|
||||
}
|
||||
|
||||
/// This function returns the OAuth1 Token object from TwitterConfig
|
||||
fn get_token(config: &TwitterConfig) -> Token {
|
||||
oauth1_request::Token::from_parts(
|
||||
config.consumer_key.to_string(),
|
||||
config.consumer_secret.to_string(),
|
||||
config.oauth_token.to_string(),
|
||||
config.oauth_token_secret.to_string(),
|
||||
)
|
||||
}
|
||||
|
||||
/// This functions deletes a tweet, given its id
|
||||
pub async fn delete_tweet(config: &TwitterConfig, id: u64) -> Result<(), Box<dyn Error>> {
|
||||
debug!("Deleting Tweet {}", id);
|
||||
let empty_request = EmptyRequest {}; // Why? Because fuck you, that’s why!
|
||||
let token = get_token(config);
|
||||
let delete_uri = format!("{}/{}", TWITTER_API_TWEET_URL, id);
|
||||
|
||||
let client = Client::new();
|
||||
let res = client
|
||||
.delete(&delete_uri)
|
||||
.header(
|
||||
"Authorization",
|
||||
oauth1_request::delete(
|
||||
&delete_uri,
|
||||
&empty_request,
|
||||
&token,
|
||||
oauth1_request::HMAC_SHA1,
|
||||
),
|
||||
)
|
||||
.send()
|
||||
.await?;
|
||||
|
||||
if !res.status().is_success() {
|
||||
return Err(OolatoocsError::new(&format!("Cannot delete Tweet {}", id)).into());
|
||||
}
|
||||
|
||||
Ok(())
|
||||
}
|
||||
|
||||
/// This function generates a media_ids vec to be used by Twitter
pub async fn generate_media_ids(config: &TwitterConfig, media_attach: &[Attachment]) -> Vec<u64> {
    let mut medias: Vec<u64> = vec![];

    let media_attachments = media_attach.to_owned();
    let mut stream = stream::iter(media_attachments)
        .map(|media| {
            let twitter_config = config.clone();
            tokio::task::spawn(async move {
                match media.r#type {
                    AttachmentType::Image => {
                        upload_simple_media(&twitter_config, &media.url, &media.description).await
                    }
                    AttachmentType::Gifv => {
                        upload_chunk_media(&twitter_config, &media.url, "tweet_gif").await
                    }
                    AttachmentType::Video => {
                        upload_chunk_media(&twitter_config, &media.url, "tweet_video").await
                    }
                    _ => Err::<u64, Box<dyn Error + Send + Sync>>(
                        OolatoocsError::new(&format!(
                            "Cannot treat this type of media: {}",
                            &media.url
                        ))
                        .into(),
                    ),
                }
            })
        })
        .buffered(4);

    while let Some(result) = stream.next().await {
        match result {
            Ok(Ok(v)) => medias.push(v),
            Ok(Err(e)) => warn!("Cannot treat media: {}", e),
            Err(e) => error!("Something went wrong when joining the main thread: {}", e),
        }
    }

    medias
}

/// This function uploads simple images from Mastodon to Twitter and returns the media id from Twitter
async fn upload_simple_media(
    config: &TwitterConfig,
    u: &str,
    d: &Option<String>,
) -> Result<u64, Box<dyn Error + Send + Sync>> {
    // initiate request parameters
    let empty_request = EmptyRequest {}; // Why? Because fuck you, that’s why!
    let token = get_token(config);

    // retrieve the length and bytes stream from the given URL
    let dl = reqwest::get(u).await?;
    let content_length = dl
        .content_length()
        .ok_or(format!("Cannot get content length for {}", u))?;
    let stream = dl.bytes_stream();

    debug!("Ref download URL: {}", u);

    // upload the media
    let client = Client::new();
    let res = client
        .post(TWITTER_UPLOAD_MEDIA_URL)
        .header(
            "Authorization",
            oauth1_request::post(
                TWITTER_UPLOAD_MEDIA_URL,
                &empty_request,
                &token,
                oauth1_request::HMAC_SHA1,
            ),
        )
        .multipart(Form::new().part(
            "media",
            Part::stream_with_length(Body::wrap_stream(stream), content_length),
        ))
        .send()
        .await?
        .json::<UploadMediaResponse>()
        .await?;

    debug!("Media ID: {}", res.media_id);

    // update the metadata
    if let Some(metadata) = d {
        debug!("Metadata found! Processing…");
        metadata_create(config, res.media_id, metadata).await?;
    }

    Ok(res.media_id)
}

/// This function updates the metadata given the current media_id and token
async fn metadata_create(
    config: &TwitterConfig,
    id: u64,
    m: &str,
) -> Result<(), Box<dyn Error + Send + Sync>> {
    let token = get_token(config);
    let empty_request = EmptyRequest {};

    let media_metadata = MediaMetadata {
        media_id: id,
        alt_text: MediaMetadataAltText {
            text: m.to_string(),
        },
    };

    debug!("Metadata to process: {}", m);

    let client = Client::new();
    let metadata = client
        .post(TWITTER_METADATA_MEDIA_URL)
        .header(
            "Authorization",
            oauth1_request::post(
                TWITTER_METADATA_MEDIA_URL,
                &empty_request,
                &token,
                oauth1_request::HMAC_SHA1,
            ),
        )
        .json(&media_metadata)
        .send()
        .await?;

    debug!("Metadata processed with return code: {}", metadata.status());

    Ok(())
}

/// This posts video/gif to Twitter and returns the media id from Twitter
async fn upload_chunk_media(
    config: &TwitterConfig,
    u: &str,
    t: &str,
) -> Result<u64, Box<dyn Error + Send + Sync>> {
    let empty_request = EmptyRequest {};
    let token = get_token(config);

    // retrieve the length, type and bytes stream from the given URL
    let mut dl = reqwest::get(u).await?;
    let content_length = dl
        .content_length()
        .ok_or(format!("Cannot get content length for {}", u))?;
    let content_headers = dl.headers().clone();
    let content_type = content_headers
        .get("Content-Type")
        .ok_or(format!("Cannot get content type for {}", u))?
        .to_str()?;

    debug!("Init the slot for uploading media: {}", u);
    // init the slot for uploading
    let client = Client::new();
    let orig_media_id = client
        .post(TWITTER_UPLOAD_MEDIA_URL)
        .header(
            "Authorization",
            oauth1_request::post(
                TWITTER_UPLOAD_MEDIA_URL,
                &empty_request,
                &token,
                oauth1_request::HMAC_SHA1,
            ),
        )
        .multipart(
            Form::new()
                .text("command", "INIT")
                .text("media_type", content_type.to_owned())
                .text("total_bytes", content_length.to_string())
                .text("media_category", t.to_string()),
        )
        .send()
        .await?
        .json::<UploadMediaResponse>()
        .await?;

    debug!("Slot initiated with ID: {}", orig_media_id.media_id);

    debug!("Appending media to ID: {}", orig_media_id.media_id);
    // append the media to the corresponding slot
    let mut segment: u8 = 0;
    while let Some(chunk) = dl.chunk().await? {
        debug!(
            "Appending segment {} for media ID {}",
            segment, orig_media_id.media_id
        );
        let chunk_size: u64 = chunk.len().try_into().unwrap();
        let res = client
            .post(TWITTER_UPLOAD_MEDIA_URL)
            .header(
                "Authorization",
                oauth1_request::post(
                    TWITTER_UPLOAD_MEDIA_URL,
                    &empty_request,
                    &token,
                    oauth1_request::HMAC_SHA1,
                ),
            )
            .multipart(
                Form::new()
                    .text("command", "APPEND")
                    .text("media_id", orig_media_id.media_id.to_string())
                    .text("segment_index", segment.to_string())
                    .part("media", Part::stream_with_length(chunk, chunk_size)),
            )
            .send()
            .await?;

        if !res.status().is_success() {
            return Err(
                OolatoocsError::new(&format!("Cannot upload part {} of {}", segment, u)).into(),
            );
        }

        segment += 1;
    }

    debug!("Finalize media ID: {}", orig_media_id.media_id);
    // Finalizing task
    let fin = client
        .post(TWITTER_UPLOAD_MEDIA_URL)
        .header(
            "Authorization",
            oauth1_request::post(
                TWITTER_UPLOAD_MEDIA_URL,
                &empty_request,
                &token,
                oauth1_request::HMAC_SHA1,
            ),
        )
        .multipart(
            Form::new()
                .text("command", "FINALIZE")
                .text("media_id", orig_media_id.media_id.to_string()),
        )
        .send()
        .await?
        .json::<UploadMediaResponse>()
        .await?;

    if let Some(p_info) = fin.processing_info {
        if let Some(wait_sec) = p_info.check_after_secs {
            debug!(
                "Processing is not finished yet for ID {}, waiting {} secs",
                orig_media_id.media_id, wait_sec
            );
            // getting here, we have a status and a check_after_secs
            // this status can be anything but we will check it afterwards
            // whatever happens, we can wait here before proceeding
            sleep(Duration::from_secs(wait_sec)).await;

            let command = UploadMediaCommand {
                command: "STATUS".to_string(),
                media_id: orig_media_id.media_id.to_string(),
            };

            loop {
                debug!(
                    "Checking on status for ID {} after waiting {} secs",
                    orig_media_id.media_id, wait_sec
                );

                let status = client
                    .get(TWITTER_UPLOAD_MEDIA_URL)
                    .header(
                        "Authorization",
                        oauth1_request::get(
                            TWITTER_UPLOAD_MEDIA_URL,
                            &command,
                            &token,
                            oauth1_request::HMAC_SHA1,
                        ),
                    )
                    .query(&command)
                    .send()
                    .await?
                    .json::<UploadMediaResponse>()
                    .await?;

                let p_status = status.processing_info.unwrap(); // shouldn’t be None at this point
                match p_status.state {
                    UploadMediaResponseProcessingInfoState::Failed => {
                        debug!("Processing has failed!");
                        return Err(OolatoocsError::new(&format!(
                            "Upload for {} (id: {}) has failed",
                            u, orig_media_id.media_id
                        ))
                        .into());
                    }
                    UploadMediaResponseProcessingInfoState::Succeeded => {
                        debug!("Processing has succeeded, exiting loop!");
                        break;
                    }
                    UploadMediaResponseProcessingInfoState::Pending
                    | UploadMediaResponseProcessingInfoState::InProgress => {
                        debug!(
                            "Processing still pending, waiting {} secs more…",
                            p_status.check_after_secs.unwrap() // unwrap is safe here,
                                                               // check_after_secs is only present
                                                               // when status is pending or in
                                                               // progress
                        );
                        sleep(Duration::from_secs(p_status.check_after_secs.unwrap())).await;
                        continue;
                    }
                }
            }
        }
    }

    Ok(orig_media_id.media_id)
}

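/// Converts a Mastodon Poll into a TweetPoll: option titles are truncated to 25 chars
/// and the time remaining before the poll closes is converted to minutes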
pub fn transform_poll(p: &Poll) -> TweetPoll {
    let poll_end_datetime = p.expires_at.unwrap(); // should be safe at this point
    let now = Utc::now();
    let diff = poll_end_datetime.signed_duration_since(now);

    TweetPoll {
        options: p
            .options
            .iter()
            .map(|i| i.title.chars().take(25).collect::<String>())
            .collect(),
        duration_minutes: diff.num_minutes().try_into().unwrap(), // safe here, number is positive
                                                                  // and can’t be over 21600
    }
}

/// This posts Tweets with all the associated medias
pub async fn post_tweet(
    config: &TwitterConfig,
    content: &str,
    medias: Vec<u64>,
    reply_to: Option<u64>,
    poll: Option<TweetPoll>,
) -> Result<u64, Box<dyn Error>> {
    let empty_request = EmptyRequest {}; // Why? Because fuck you, that’s why!
    let token = get_token(config);

    let tweet = Tweet {
        text: content.to_string(),
        media: medias.is_empty().not().then(|| TweetMediasIds {
            media_ids: medias.iter().map(|m| m.to_string()).collect(),
        }),
        reply: reply_to.map(|s| TweetReply {
            in_reply_to_tweet_id: s.to_string(),
        }),
        poll,
    };

    let client = Client::new();
    let res = client
        .post(TWITTER_API_TWEET_URL)
        .header(
            "Authorization",
            oauth1_request::post(
                TWITTER_API_TWEET_URL,
                &empty_request,
                &token,
                oauth1_request::HMAC_SHA1,
            ),
        )
        .json(&tweet)
        .send()
        .await?
        .json::<TweetResponse>()
        .await?;

    Ok(res.data.id.parse::<u64>().unwrap())
}

#[cfg(test)]
mod tests {
    use super::*;
    use megalodon::entities::PollOption;

    #[test]
    fn test_transform_poll() {
        let poll = Poll {
            id: "youpi".to_string(),
            expires_at: Some(Utc::now()),
            expired: false,
            multiple: false,
            votes_count: 0,
            voters_count: None,
            options: vec![
                PollOption {
                    title: "Je suis beaucoup trop long comme option, tronque-moi !".to_string(),
                    votes_count: None,
                },
                PollOption {
                    title: "nope".to_string(),
                    votes_count: None,
                },
            ],
            voted: None,
            emojis: vec![],
        };

        let tweet_poll_res = transform_poll(&poll);
        let tweet_pool_expected = TweetPoll {
            duration_minutes: 0,
            options: vec!["Je suis beaucoup trop lon".to_string(), "nope".to_string()],
        };

        assert_eq!(tweet_poll_res.options, tweet_pool_expected.options);
    }
}
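Taken together, the functions above form the posting pipeline: the attachments are uploaded first with generate_media_ids (up to 4 in parallel, unsupported media being logged and skipped), then the tweet itself is created with post_tweet, optionally as a reply or with a poll. A minimal sketch of how a caller could chain the two, assuming a TwitterConfig, the toot text and its attachments are obtained elsewhere (the crosspost function below is hypothetical glue code, not part of the crate):

// Hypothetical caller: `config`, `text` and `attachments` are assumed to be
// provided by the surrounding application; only generate_media_ids and
// post_tweet come from this module.
async fn crosspost(
    config: &TwitterConfig,
    text: &str,
    attachments: &[Attachment],
) -> Result<u64, Box<dyn std::error::Error>> {
    // upload every attachment first; failures are logged and skipped
    let media_ids = generate_media_ids(config, attachments).await;

    // then publish the tweet itself, here without reply target or poll
    let tweet_id = post_tweet(config, text, media_ids, None, None).await?;

    Ok(tweet_id)
}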
171 src/utils.rs
@@ -1,13 +1,14 @@
+use atrium_api::{app::bsky::embed::defs::AspectRatioData, types::Object};
 use html_escape::decode_html_entities;
-use megalodon::entities::status::Tag;
+use megalodon::entities::{attachment::MetaSub, status::Tag};
 use regex::Regex;
-use std::error::Error;
+use std::{error::Error, num::NonZeroU64};
 
-/// Generate 2 contents out of 1 if that content is > 280 chars, None else
+/// Generate 2 contents out of 1 if that content is > 300 chars, None else
 pub fn generate_multi_tweets(content: &str) -> Option<(String, String)> {
     // Twitter webforms are utf-8 encoded, so we cannot count on len(), we don’t need
     // encode_utf16().count()
-    if twitter_count(content) <= 280 {
+    if twitter_count(content) <= 300 {
         return None;
     }
 
@@ -38,7 +39,13 @@ fn twitter_count(content: &str) -> usize {
 
     for word in split_content {
         if word.starts_with("http://") || word.starts_with("https://") {
-            count += 23;
+            // It’s not that simple. Bsky adapts itself to the URL.
+            // https://github.com -> 10 chars
+            // https://github.com/ -> 10 chars
+            // https://github.com/NVNTLabs -> 19 chars
+            // https://github.com/NVNTLabs/ -> 20 chars
+            // so taking the maximum here to simplify things
+            count += 26;
         } else {
             count += word.chars().count();
         }
@@ -47,10 +54,16 @@ fn twitter_count(content: &str) -> usize {
     count
 }
 
-pub fn strip_everything(content: &str, tags: &Vec<Tag>) -> Result<String, Box<dyn Error>> {
+pub fn strip_everything(
+    content: &str,
+    tags: &Vec<Tag>,
+    mastodon_base: &str,
+) -> Result<String, Box<dyn Error>> {
     let mut res = strip_html_tags(&content.replace("</p><p>", "\n\n").replace("<br />", "\n"));
 
-    strip_mastodon_tags(&mut res, tags).unwrap();
+    strip_quote_header(&mut res, mastodon_base)?;
+
+    strip_mastodon_tags(&mut res, tags)?;
 
     res = res.trim_end_matches('\n').trim_end_matches(' ').to_string();
     res = decode_html_entities(&res).to_string();
@@ -58,6 +71,16 @@ pub fn strip_everything(content: &str, tags: &Vec<Tag>) -> Result<String, Box<dy
     Ok(res)
 }
 
+fn strip_quote_header(content: &mut String, mastodon_base: &str) -> Result<(), Box<dyn Error>> {
+    let re = Regex::new(&format!(
+        r"^RE: {}\S+\n\n",
+        mastodon_base.replace(".", r"\.")
+    ))?;
+    *content = re.replace(content, "").to_string();
+
+    Ok(())
+}
+
 fn strip_mastodon_tags(content: &mut String, tags: &Vec<Tag>) -> Result<(), Box<dyn Error>> {
     for tag in tags {
         let re = Regex::new(&format!("(?i)(#{} ?)", &tag.name))?;
@@ -88,10 +111,119 @@ fn strip_html_tags(input: &str) -> String {
     data
 }
 
+pub fn convert_aspect_ratio(m: &Option<MetaSub>) -> Option<Object<AspectRatioData>> {
+    match m {
+        Some(ms) => {
+            if ms.height.is_some_and(|x| x > 0) && ms.width.is_some_and(|x| x > 0) {
+                Some(
+                    AspectRatioData {
+                        // unwrap is safe here
+                        height: NonZeroU64::new(ms.height.unwrap().into()).unwrap(),
+                        width: NonZeroU64::new(ms.width.unwrap().into()).unwrap(),
+                    }
+                    .into(),
+                )
+            } else {
+                None
+            }
+        }
+        None => None,
+    }
+}
+
 #[cfg(test)]
 mod tests {
     use super::*;
 
+    #[test]
+    fn test_convert_aspect_ratio() {
+        // test None orig aspect ratio
+        let metasub: Option<MetaSub> = None;
+
+        let result = convert_aspect_ratio(&metasub);
+
+        assert_eq!(result, None);
+
+        // test complete with image
+        let metasub = Some(MetaSub {
+            width: Some(1920),
+            height: Some(1080),
+            size: Some(String::from("1920x1080")),
+            aspect: Some(1.7777777777777777),
+            frame_rate: None,
+            duration: None,
+            bitrate: None,
+        });
+
+        let expected_result = Some(
+            AspectRatioData {
+                height: NonZeroU64::new(1080).unwrap(),
+                width: NonZeroU64::new(1920).unwrap(),
+            }
+            .into(),
+        );
+
+        let result = convert_aspect_ratio(&metasub);
+
+        assert_eq!(result, expected_result);
+
+        // test complete with video
+        let metasub = Some(MetaSub {
+            width: Some(500),
+            height: Some(278),
+            size: None,
+            aspect: None,
+            frame_rate: Some(String::from("10/1")),
+            duration: Some(0.9),
+            bitrate: Some(973191),
+        });
+
+        let expected_result = Some(
+            AspectRatioData {
+                height: NonZeroU64::new(278).unwrap(),
+                width: NonZeroU64::new(500).unwrap(),
+            }
+            .into(),
+        );
+
+        let result = convert_aspect_ratio(&metasub);
+
+        assert_eq!(result, expected_result);
+
+        /* test broken shit
+         * that should never happen but you never know
+         */
+        // zero width
+        let metasub = Some(MetaSub {
+            width: Some(0),
+            height: Some(278),
+            size: None,
+            aspect: None,
+            frame_rate: Some(String::from("10/1")),
+            duration: Some(0.9),
+            bitrate: Some(973191),
+        });
+
+        let result = convert_aspect_ratio(&metasub);
+
+        assert_eq!(result, None);
+
+        // None height
+        let metasub = Some(MetaSub {
+            width: Some(500),
+            height: None,
+            size: None,
+            aspect: None,
+            frame_rate: Some(String::from("10/1")),
+            duration: Some(0.9),
+            bitrate: Some(973191),
+        });
+
+        let result = convert_aspect_ratio(&metasub);
+
+        assert_eq!(result, None);
+    }
+
     #[test]
     fn test_twitter_count() {
         let content = "tamerelol?! 🐵";
@@ -100,11 +232,11 @@ mod tests {
 
         let content = "Shoot out to https://y.ml/ !";
 
-        assert_eq!(twitter_count(content), 38);
+        assert_eq!(twitter_count(content), 41);
 
         let content = "this is the link https://www.google.com/tamerelol/youpi/tonperemdr/tarace.html if you like! What if I shit a final";
 
-        assert_eq!(twitter_count(content), 76);
+        assert_eq!(twitter_count(content), 79);
 
         let content = "multi ple space";
 
@@ -112,7 +244,7 @@ mod tests {
 
         let content = "This link is LEEEEET\n\nhttps://www.factornews.com/actualites/ca-sent-le-sapin-pour-free-radical-design-49985.html";
 
-        assert_eq!(twitter_count(content), 45);
+        assert_eq!(twitter_count(content), 48);
     }
 
     #[test]
@@ -131,6 +263,13 @@ mod tests {
         let youpi = generate_multi_tweets(&tweet_content);
 
         assert_eq!(None, youpi);
+
+        // test with 299 chars
+        let tweet_content = "Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate vulver amico tio".to_string();
+
+        let youpi = generate_multi_tweets(&tweet_content);
+
+        assert_eq!(None, youpi);
     }
 
     #[test]
@@ -173,9 +312,19 @@ mod tests {
 
     #[test]
     fn test_strip_everything() {
+        // a classic toot
         let content = "<p>Ce soir à 21h, c'est le Dojobar ! Au programme ce soir, une rétrospective sur la série Mario & Luigi.<br />Comme d'hab, le Twitch sera ici : <a href=\"https://twitch.tv/nintendojofr\" target=\"_blank\" rel=\"nofollow noopener noreferrer\" translate=\"no\"><span class=\"invisible\">https://</span><span class=\"\">twitch.tv/nintendojofr</span><span class=\"invisible\"></span></a><br />Ou juste l'audio là : <a href=\"https://nintendojo.fr/dojobar\" target=\"_blank\" rel=\"nofollow noopener noreferrer\" translate=\"no\"><span class=\"invisible\">https://</span><span class=\"\">nintendojo.fr/dojobar</span><span class=\"invisible\"></span></a><br />A toute !</p>";
         let expected_result = "Ce soir à 21h, c'est le Dojobar ! Au programme ce soir, une rétrospective sur la série Mario & Luigi.\nComme d'hab, le Twitch sera ici : https://twitch.tv/nintendojofr\nOu juste l'audio là : https://nintendojo.fr/dojobar\nA toute !".to_string();
-        let result = strip_everything(content, &vec![]).unwrap();
+        let result = strip_everything(content, &vec![], "https://m.nintendojo.fr").unwrap();
 
         assert_eq!(result, expected_result);
+
+        // a quoted toot
+        let content = "<p class=\"quote-inline\">RE: <a href=\"https://m.nintendojo.fr/@nintendojofr/115446347351491651\" target=\"_blank\" rel=\"nofollow noopener\" translate=\"no\"><span class=\"invisible\">https://</span><span class=\"ellipsis\">m.nintendojo.fr/@nintendojofr/</span><span class=\"invisible\">115446347351491651</span></a></p><p>Assassin’s Creed Shadows pèsera environ 62,8 Go sur Switch 2 (et un peu plus de 100 Go sur les autres supports), soit tout juste pour rentrer sur une cartouche de 64 Go.</p><p>Ou pas, pour rappel…</p><p><a href=\"https://m.nintendojo.fr/tags/AssassinsCreedShadows\" class=\"mention hashtag\" rel=\"tag\">#<span>AssassinsCreedShadows</span></a> <a href=\"https://m.nintendojo.fr/tags/Ubisoft\" class=\"mention hashtag\" rel=\"tag\">#<span>Ubisoft</span></a> <a href=\"https://m.nintendojo.fr/tags/NintendoSwitch2\" class=\"mention hashtag\" rel=\"tag\">#<span>NintendoSwitch2</span></a></p>";
+
+        let expected_result = "Assassin’s Creed Shadows pèsera environ 62,8 Go sur Switch 2 (et un peu plus de 100 Go sur les autres supports), soit tout juste pour rentrer sur une cartouche de 64 Go.\n\nOu pas, pour rappel…\n\n#AssassinsCreedShadows #Ubisoft #NintendoSwitch2";
+
+        let result = strip_everything(content, &vec![], "https://m.nintendojo.fr").unwrap();
+
+        assert_eq!(result, expected_result);
     }